A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.
This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.
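The loop described above can be sketched in miniature. The toy below keeps the self-play feedback structure (play against yourself, update the evaluator, repeat) but replaces the neural network and tree search with a tabular value function on a tiny subtraction game; everything in it (the game, the learning rate, the episode count) is an illustrative stand-in, not AlphaGo Zero's actual setup.

```python
import random

random.seed(1)

# Toy "subtraction game": a pile of 10 stones, each player removes 1-3,
# and whoever takes the last stone wins. V[s] estimates the value of
# having s stones left *for the player about to move* (+1 = winning).
V = {s: 0.0 for s in range(1, 11)}
LR, EPS = 0.1, 0.2

def move_value(s, a):
    """Value of taking a stones from pile s, from the mover's perspective."""
    s2 = s - a
    return 1.0 if s2 == 0 else -V[s2]   # a good position for the opponent is bad for us

def pick(s):
    actions = list(range(1, min(3, s) + 1))
    if random.random() < EPS:           # occasional exploration
        return random.choice(actions)
    return max(actions, key=lambda a: move_value(s, a))

for _ in range(20000):                  # self-play: the same V plays both sides
    s = 10
    while s > 0:
        a = pick(s)
        V[s] += LR * (move_value(s, a) - V[s])   # TD update toward the outcome
        s -= a
```

With enough episodes, greedy play against V should approximate the classic strategy of leaving the opponent a multiple of four stones; the point is only that the evaluator and the player improve each other, as in the iteration described above.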
After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self-training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which had defeated the world’s best players, including world number one Ke Jie.
AlphaGo Zero is impressive not just because it needs no data set to become the world leader at what it does. It’s impressive also because it achieves that goal at a pace no human will ever be able to match.
We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.
Summary: After a few epochs of evolution, Spiders, being the weakest, disappeared, the subpopulation of Bugs more than doubled, the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies end up dominating the population.
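The selection dynamics are easy to reproduce in a toy replicator tournament. The sketch below is not the RoboSumo setup – the three agent types and their per-matchup win probabilities are invented – but it shows why a persistent edge in individual matches compounds into population dominance.

```python
import random

random.seed(0)

# Hypothetical per-type strength; a matchup is won with probability
# proportional to strength (a stand-in for adaptation skill).
strength = {"bug": 3.0, "ant": 2.0, "spider": 1.0}

population = ["ant"] * 350 + ["bug"] * 350 + ["spider"] * 350

def epoch(pop):
    """One epoch: pair agents at random; each loser is replaced by a copy
    of its winner, keeping the population size constant."""
    pop = pop[:]
    random.shuffle(pop)
    new_pop = []
    for a, b in zip(pop[0::2], pop[1::2]):
        p_a = strength[a] / (strength[a] + strength[b])
        winner = a if random.random() < p_a else b
        new_pop += [winner, winner]      # the winner replicates into the loser's slot
    return new_pop

for _ in range(10):
    population = epoch(population)

counts = {t: population.count(t) for t in strength}
```

Run with this seed, the weakest type collapses within a few epochs, mirroring the Spiders' disappearance in the experiment.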
OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least in certain conditions. The environment is dynamic for a number of reasons, including the fact that opponents are learning as well.
AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.
The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.
We know that the neocortex is a prediction machine and that human intelligence amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.
Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
The global medical bionic implants and exoskeletons market stood at US$ 454.5 Mn in 2016. It is expected to expand at a CAGR of 7.5% during the period 2017–2027 to reach US$ 1,001.4 Mn. Factors such as rising amputation rates, diabetes, arthritis, trauma cases and an expanding ageing demographic have led to a higher number of bionic implant and exoskeleton procedures. According to the National Center for Health Statistics, 185,000 new amputations are performed in the U.S. every year. Advances in robotics (mind-controlled bionic limbs and exoskeletons), coupled with 3D printing, are also positively impacting the growth of the market.
This is just the market for addressing a disability or impairment (aka “fixing”). There will also be a market for intentional augmentation (aka “improving”).
The biohacker claims he’s the first person to try to modify his own genome with the groundbreaking gene-editing technology known as CRISPR. And he’s providing the world with the means to do it, too, by posting a “DIY Human CRISPR Guide” online and selling $20 DNA that promotes muscle growth.
But editing your DNA isn’t as simple as following step-by-step advice. Scientists say that injecting yourself with a gene for muscle growth, as Zayner’s done, won’t in fact pump up your arms. Zayner himself admits that his experiments over the last year haven’t visibly changed his body. There are safety risks, too, experts say: People could infect themselves, or induce an inflammatory reaction.
But to Zayner, whether or not the experiment actually works is beside the point. What he’s trying to demonstrate, Zayner told BuzzFeed News, is that cutting-edge biology tools like CRISPR should be available to people to do with as they wish, and not be controlled by academics and pharmaceutical companies.
The point is not whether it’s legit or not, effective or not, legal or not. The point is that there is a growing community of humans experimenting, tinkering, and taking risks with their bodies, trying to achieve things that the mainstream audience considers horrifying, impossible, out of reach. This community doesn’t have much credibility today, just like IT security hackers didn’t have much credibility in the early days of the Internet. Today, hacking communities are recruiting pools for the top military organizations in the world, and hacking conferences are a prime stage for the biggest software and hardware vendors on the market.
Lost in a sea of pseudo-scientists, impostors, scammers, and amateur wannabes, there are a few serious, determined, fearless explorers of the human body. They won’t look credible until they are.
The increase in cancer incidence has multiple aetiologies. The accumulation of mutant alleles in populations may be one of them, given the strong heritability of many cancers. The opportunity for the operation of natural selection has decreased in the past ~150 years because of reductions in mortality and fertility. Mutation–selection balance may have been disturbed in this process, and genes providing the background for some cancers may have been accumulating in human gene pools. Worldwide, based on WHO statistics for 173 countries, the index of the opportunity for selection is strongly inversely correlated with cancer incidence in people aged 0–49 years and in people of all ages. This relationship remains significant when gross domestic product per capita (GDP), life expectancy of older people (e50), obesity, physical inactivity, smoking and urbanization are kept statistically constant, for fifteen (15) of twenty-seven (27) individual cancer incidence rates. The twelve (12) cancers that are not correlated with relaxed natural selection after considering the six potential confounders are largely attributable to external causes such as viruses and toxins. Ratios of the average cancer incidence rates of the 10 countries with the lowest opportunity for selection to those of the 10 countries with the highest opportunity for selection are 2.3 (all cancers at all ages), 2.4 (all cancers in the 0–49 years age group), 5.7 (average ratio for strongly genetically based cancers) and 2.1 (average ratio for cancers with less genetic background).
Because of the quality of our healthcare in western society, we have almost removed natural selection as the “janitor of the gene pool”.
‘Natural selection in the past had an ample opportunity to eliminate defective genes introduced by mutations.
He said: ‘However, natural selection has been significantly reduced in the past 100 to 150 years, and the direct consequence of this process is that nearly every individual born into a population can pass genes to the next generation, while some 150 years ago, only 50 per cent or less of individuals had this chance.
‘Unfortunately, the accumulation of genetic mutations over time and across multiple generations is like a delayed death sentence.
‘Allowing more people with cancer genes [to] survive may boost cancer gene accumulation. Patients who survive it will have a chance to pass this predisposition to the next generation.
Rather than just removing cancers, the researchers add patients should undergo genetic engineering that ‘turns off’ their tumour-causing genes.
Professor Henneberg added: ‘Assuming that the increasing genetic load underlies cancer incidence as one of the contributing factors, the only way to reduce it remains genetic engineering: repair of defective portions of the DNA, or their blockage by methylation and similar approaches.
‘These techniques, though theoretically possible, are not yet practically available.
‘They will, however, need to be developed as they provide the only human-made alternative to the disappearing action of natural selection’.
Fascinating perspective and research. I think we are totally unequipped to understand the long-term implications of how we are changing the human body.
Gene therapy typically uses an engineered virus to deliver a corrected version of a faulty gene to a patient. Rather than simply responding to the symptoms of the condition in question, it attempts to change the individual’s genetic make-up in order to solve the problem at its root.
Luxturna fixes a mutation in a gene known as RPE65, which tells the body how to produce a protein that is essential for normal eyesight. It introduces billions of engineered virus particles bearing a corrected version of the gene into the retinal cells, via a quick injection to the eyes.
It’s not an outright cure, and it doesn’t give recipients full 20/20 vision. There’s currently no data on how long its effects last, so there’s a chance that patients’ sight might begin to recede once again over time.
Cost is also a major factor in how accessible it is. Two of the treatment’s biggest competitors, Strimvelis and Kymriah, cost around $700,000 and $475,000 respectively.
It’s a lot of money for a treatment that is unlikely to be permanent, so for the moment this remains an option for only a privileged few. But what an incredible step forward.
Sensors that can measure strain, and thus bodily motions, in real time have become a hot commodity. But figuring out an optical sensor that can stand up to large strains, such as those across a bent elbow or a clenched fist, has proved a tough problem to crack.
A team of researchers at Tsinghua University, China, led by OSA member Changxi Yang, now believes it’s come up with an answer: a fiber optic sensor made of a silicone polymer that can stand up to, and detect, elongations as great as 100 percent – and effortlessly snap back to an unstrained state for repeated use.
The fibers are made of polydimethylsiloxane (PDMS), a soft, stretchable silicone elastomer that has become a common substrate in stretchable electronics. The team developed the fiber by curing a liquid silicone solution in tube-shaped molds at 80 °C, doping the mix with Rhodamine B dye molecules, whose light absorption is wavelength dependent. Because stretching the fiber shrinks its diameter, leaving the total volume invariant, extension increases the optical path length for light passing through the dye-doped fiber. That increase, in turn, can be read in the attenuation of the fiber’s transmission spectra and tied to the amount of strain in the fiber.
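The readout principle lends itself to a back-of-the-envelope sketch. Assuming Beer–Lambert absorption and volume conservation (the absorption coefficient and fiber length below are illustrative assumptions, not the paper's calibration), attenuation grows linearly with strain:

```python
import numpy as np

# Illustrative parameters (not from the paper): a 10 cm dye-doped PDMS
# fiber with absorption coefficient alpha at the dye's absorption band.
L0 = 0.10        # unstrained fiber length, m
alpha = 23.0     # absorption coefficient, 1/m (hypothetical value)

def transmission(strain):
    """Beer-Lambert transmission through the stretched fiber.

    Volume is conserved, so stretching only lengthens the optical path:
    L = L0 * (1 + strain)."""
    L = L0 * (1.0 + strain)
    return np.exp(-alpha * L)

strains = np.linspace(0.0, 1.0, 5)        # up to 100% elongation
trans = transmission(strains)             # transmitted fraction, decreasing
atten_db = -10.0 * np.log10(trans)        # attenuation in dB, linear in strain
```

The linear dB-vs-strain relation is what makes a simple calibration curve possible.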
This could lead to a new generation of “smart clothing”, especially for sport and medical applications.
In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.
In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.
Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
- Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems to be capable of processing almost any input, extracting a meaningful pattern from it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
- We are visual creatures so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if it’s a wrong assumption?
Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases?
We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms, and found that one site’s algorithmic rating system biased ratings, particularly of low-to-medium quality hotels, significantly higher than the other sites (up to 37%).
Analyzing reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used a review on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This raising of awareness included practices like efforts to reverse engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust.
We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.
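The core of such a cross-platform audit can be illustrated with a toy computation: match the same hotels across two platforms on a common rating scale and look for a systematic gap. The hotels and numbers below are invented for illustration, not data from the study.

```python
# Hypothetical mini-audit: the same hotels rated on two platforms,
# both normalized to a 0-10 scale. A consistent positive gap on one
# platform, concentrated in mid-quality hotels, suggests rating inflation.
ratings = {                      # hotel -> (platform_a, platform_b)
    "hotel_1": (8.9, 8.7),
    "hotel_2": (8.1, 6.4),
    "hotel_3": (7.8, 5.9),
    "hotel_4": (7.5, 6.1),
    "hotel_5": (9.2, 9.1),
}

gaps = {h: a - b for h, (a, b) in ratings.items()}
mean_gap = sum(gaps.values()) / len(gaps)            # systematic uplift
inflation_pct = {h: 100.0 * (a - b) / b              # relative inflation
                 for h, (a, b) in ratings.items()}
```

In this made-up sample, the mid-quality hotels show inflation above 30% while the top-rated ones barely move – the same shape of bias the audit reports.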
Biointegrated sensors can address various challenges in medicine by transmitting a wide variety of biological signals. A tempting possibility that has not been explored before is whether we can take advantage of genome editing technology to transform a small portion of endogenous tissue into an intrinsic and long-lasting sensor of physiological signals. The human skin and epidermal stem cells have several unique advantages, making them particularly suitable for genetic engineering and applications in vivo. In this report, we took advantage of a novel platform for manipulation and transplantation of epidermal stem cells, and presented the key evidence that genome-edited skin stem cells can be exploited for continuous monitoring of blood glucose level in vivo. Additionally, by advanced design of genome editing, we developed an autologous skin graft that can sense glucose level and deliver therapeutic proteins for diabetes treatment. Our results revealed the clinical potential for skin somatic gene therapy.
To make their biological invention, Wu and team first collected from mice some of the stem cells whose job it is to make new skin. Next, they used the gene-editing technique CRISPR to create their built-in glucose detector. That involved adding a gene from E. coli bacteria whose product is a protein that sticks to sugar molecules.
Next, they added DNA that produces two fluorescent molecules. That way, when the E. coli protein sticks to sugar and changes shape, it moves the fluorescent molecules closer or further apart—generating a signal that Wu’s team could see using a microscope.
All that was done in a lab dish—so next the team tested whether the glucose-sensing cells could be incorporated into a mouse’s body by grafting the engineered skin patches onto their backs. When mice who were left hungry were suddenly given a big dose of sugar, Wu says, the cells reacted within 30 seconds. Measuring glucose this way was just as accurate as a blood test, which they also tried.
A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
Our insights were derived from reinforcement learning, the subdiscipline of AI research that focuses on systems that learn by trial and error. The key computational idea we drew on is that to estimate future reward, an agent must first estimate how much immediate reward it expects to receive in each state, and then weight this expected reward by how often it expects to visit that state in the future. By summing up this weighted reward across all possible states, the agent obtains an estimate of future reward.
Similarly, we argue that the hippocampus represents every situation – or state – in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events, known formally as the “successor representation”. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.
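The successor representation has a compact closed form that is easy to verify on a toy problem. For a fixed policy with state-transition matrix T and discount γ, the SR is M = (I - γT)^-1, and values follow as V = Mr; the chain below and its numbers are purely illustrative.

```python
import numpy as np

# Toy 4-state chain under a fixed policy; T[i, j] is the probability of
# moving from state i to state j. M holds each state's expected discounted
# future occupancy of every other state (the successor representation).
gamma = 0.9
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],    # last state self-loops
])
r = np.array([0.0, 0.0, 0.0, 1.0])   # reward only in the last state

M = np.linalg.inv(np.eye(4) - gamma * T)   # successor representation
V_sr = M @ r                               # value = occupancy-weighted reward

# Sanity check: same values from the direct Bellman solve V = r + gamma*T*V.
V_bellman = np.linalg.solve(np.eye(4) - gamma * T, r)
ok = np.allclose(V_sr, V_bellman)
```

Because M is cached, a change in the reward vector r updates all values with a single matrix-vector product – the "rapid adaptation without expensive simulation" the authors describe.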
I wonder what Jeff Hawkins thinks about this new theory.
RNA has important and diverse roles in biology, but molecular tools to manipulate and measure it are limited. For example, RNA interference can efficiently knockdown RNAs, but it is prone to off-target effects, and visualizing RNAs typically relies on the introduction of exogenous tags. Here we demonstrate that the class 2 type VI RNA-guided RNA-targeting CRISPR–Cas effector Cas13a (previously known as C2c2) can be engineered for mammalian cell RNA knockdown and binding.
After initial screening of 15 orthologues, we identified Cas13a from Leptotrichia wadei (LwaCas13a) as the most effective in an interference assay in Escherichia coli. LwaCas13a can be heterologously expressed in mammalian and plant cells for targeted knockdown of either reporter or endogenous transcripts with comparable levels of knockdown as RNA interference and improved specificity. Catalytically inactive LwaCas13a maintains targeted RNA binding activity, which we leveraged for programmable tracking of transcripts in live cells.
Our results establish CRISPR–Cas13a as a flexible platform for studying RNA in mammalian cells and therapeutic development.
Microsatellite repeat expansions in DNA produce pathogenic RNA species that cause dominantly inherited diseases such as myotonic dystrophy type 1 and 2 (DM1/2), Huntington’s disease, and C9orf72-linked amyotrophic lateral sclerosis (C9-ALS). Means to target these repetitive RNAs are required for diagnostic and therapeutic purposes. Here, we describe the development of a programmable CRISPR system capable of specifically visualizing and eliminating these toxic RNAs. We observe specific targeting and efficient elimination of microsatellite repeat expansion RNAs both when exogenously expressed and in patient cells. Importantly, RNA-targeting Cas9 (RCas9) reverses hallmark features of disease including elimination of RNA foci among all conditions studied (DM1, DM2, C9-ALS, polyglutamine diseases), reduction of polyglutamine protein products, relocalization of repeat-bound proteins to resemble healthy controls, and efficient reversal of DM1-associated splicing abnormalities in patient myotubes. Finally, we report a truncated RCas9 system compatible with adeno-associated viral packaging. This effort highlights the potential of RCas9 for human therapeutics.
Normally, CRISPR uses a slicing protein called Cas9 that recognizes and chops up the desired DNA, eliminating a mutated gene. Yeo and his team modified Cas9 to leave DNA alone and instead bind to and cut problematic RNA.
When tested in the lab, Yeo’s CRISPR tool obliterated 95 percent or more of these RNA knots in cells harboring Huntington’s disease and a type of ALS.
The researchers also tested the approach on a form of inherited muscular dystrophy, called myotonic dystrophy. They were able to eliminate 95 percent of faulty RNAs in muscle cells taken from patients. After they applied CRISPR, the once-diseased cells resembled healthy ones. Yeo thinks more than 20 genetic diseases that are caused by toxic RNA repeats could potentially be treated this way.
Knocking down these RNAs is only temporary, though. RNA constantly regenerates, so its level in cells eventually rebounds after a few days to a week.
So Yeo is designing a virus capsule to carry the CRISPR machinery to the right cells. These viral delivery shuttles would allow the Cas protein to stick around in a person’s cells longer—ideally for years, turning Cas into a mini-arsenal to keep unruly RNA at bay.
Despite their fundamental biological and clinical importance, the molecular mechanisms that regulate the first cell fate decisions in the human embryo are not well understood. Here we use CRISPR–Cas9-mediated genome editing to investigate the function of the pluripotency transcription factor OCT4 during human embryogenesis. We identified an efficient OCT4-targeting guide RNA using an inducible human embryonic stem cell-based system and microinjection of mouse zygotes. Using these refined methods, we efficiently and specifically targeted the gene encoding OCT4 (POU5F1) in diploid human zygotes and found that blastocyst development was compromised. Transcriptomics analysis revealed that, in POU5F1-null cells, gene expression was downregulated not only for extra-embryonic trophectoderm genes, such as CDX2, but also for regulators of the pluripotent epiblast, including NANOG. By contrast, Pou5f1-null mouse embryos maintained the expression of orthologous genes, and blastocyst development was established, but maintenance was compromised. We conclude that CRISPR–Cas9-mediated genome editing is a powerful method for investigating gene function in the context of human development.
CRISPR Cas9 can modify or snip out genetic defects thought to contribute to miscarriage, but until now it wasn’t clear why some embryos continued to form into a fetus and others did not.
British scientists conducting the study found that a human gene called OCT4 plays an important role in the early stages of embryonic development. The scientists used CRISPR-Cas9 to knock out this important gene in days-old human embryos and found that without it, the embryos failed to attach or grow properly.
The findings could not only help us better understand why some women suffer more miscarriages than others, but it could also potentially greatly increase the rate of successful in vitro fertilization (IVF) procedures.
Chronic wounds do not heal in an orderly fashion in part due to the lack of timely release of biological factors essential for healing. Topical administration of various therapeutic factors at different stages is shown to enhance the healing rate of chronic wounds. Developing a wound dressing that can deliver biomolecules with a predetermined spatial and temporal pattern would be beneficial for effective treatment of chronic wounds. Here, an actively controlled wound dressing is fabricated using composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers. The fibers are loaded with different drugs and biological factors and are then assembled using textile processes to create a flexible and wearable wound dressing. These fibers can be individually addressed to enable on-demand release of different drugs with a controlled temporal profile. Here, the effectiveness of the engineered dressing for on-demand release of antibiotics and vascular endothelial growth factor (VEGF) is demonstrated for eliminating bacterial infection and inducing angiogenesis in vitro. The effectiveness of the VEGF release on improving healing rate is also demonstrated in a murine model of diabetic wounds.
Instead of plain sterile cotton or other fibers, this dressing is made of “composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers,” which really says it all.
It acts as a regular bandage, protecting the injury from exposure and so on, but attached to it is a stamp-sized microcontroller. When prompted by an app (or an onboard timer, or conceivably sensors woven into the bandage), the microcontroller sends a voltage through certain of the fibers, warming them and activating the medications lying dormant in the hydrogel.
Those medications could be anything from topical anesthetics to antibiotics to more sophisticated things like growth hormones that accelerate healing. More voltage, more medication — and each fiber can carry a different one.
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.
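The lateral "voting" between columns can be caricatured with sets. The sketch below is far simpler than the paper's network – objects are plain location-to-feature maps and voting is set intersection – but it shows why multiple columns need fewer sensations than one.

```python
# Objects as (location -> feature) maps. Each column senses one
# (location, feature) pair per step and keeps the set of objects
# consistent with everything it has sensed so far. "Voting" over
# lateral connections is modeled here as intersecting candidate sets.
objects = {
    "cup":    {0: "rim", 1: "handle", 2: "base"},
    "bowl":   {0: "rim", 1: "curve",  2: "base"},
    "bottle": {0: "cap", 1: "curve",  2: "base"},
}

def consistent(sensations):
    """Objects whose feature map agrees with every (location, feature) seen."""
    return {name for name, fmap in objects.items()
            if all(fmap.get(loc) == feat for loc, feat in sensations)}

# A single column touching location 2 cannot tell the objects apart...
single = consistent([(2, "base")])

# ...but two columns touching different locations and voting converge at once.
col_a = consistent([(0, "rim")])      # consistent with cup or bowl
col_b = consistent([(1, "curve")])    # consistent with bowl or bottle
vote = col_a & col_b                  # only "bowl" satisfies both columns
```

A lone column would need several more touches to reach the same answer; the intersection is the toy analogue of lateral connections letting columns share partial knowledge.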
This one is not about artificial intelligence per se, but Jeff Hawkins was the first to suggest a unifying theory of how the brain works, in his 2005 book On Intelligence. It’s interesting to see how the theory has been refined in the last 12 years and how it might influence today’s development of AI algorithms.
The plan prescribes a high level of government investment in theoretical and applied AI breakthroughs (see Part III below for more), while also acknowledging that, in China as around the world, private companies are currently leading the charge on commercial applications of AI.
The plan acknowledges, meanwhile, that China remains far behind world leaders in development of key hardware enablers of AI, such as microchips suited for machine learning use (e.g., GPUs or re-configurable processors). The plan’s ambition is underlined by its recognition of the hard road ahead.
China is embarking upon an agenda of “intelligentization” (智能化), seeking to take advantage of the transformative potential of AI throughout society, the economy, government, and the military. Through this new plan, China intends to pursue “indigenous innovation” in the “strategic frontier” technology of AI in furtherance of a national strategy for innovation-driven development.
the Chinese government is encouraging its own AI enterprises to pursue an approach of “going out,” including through overseas mergers and acquisitions, equity investments, and venture capital, as well as the establishment of research and development centers abroad.
China plans to develop resources and ecosystems conducive to the goal of becoming a “premier innovation center” in AI science and technology by 2030. In support of this goal, the plan calls for an “open source and open” approach that takes advantage of synergies among industry, academia, research, and applications, including through creating AI “innovation clusters.”
the Chinese leadership wants to ensure that advances in AI can be leveraged for national defense, through a national strategy for military-civil fusion (军民融合). According to the plan, resources and advances will be shared and transferred between civilian and military contexts. This will involve the establishment and normalizing of mechanisms for communication and coordination among scientific research institutes, universities, enterprises, and military industry.
Full translation of China’s State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan – Both the original document and the commentary on NewAmerica are critical reads.
Neuromation
We propose a solution whose accuracy is guaranteed by construction: synthesizing large datasets along with perfectly accurate labels. The benefits of synthetic data are manifold. It is fast to synthesize and render, perfectly accurate, tailored for the task at hand, and can be modified to improve the model and training itself. It is important to note that real data with accurate labels is still required for evaluating models trained on synthetic data, in order to guarantee acceptable performance at inference time. However, the amount of validation data required is orders of magnitude smaller than the amount of training data!
They generate and sell synthetic datasets for AI training. All data is charged per item, and comes pre-labelled.
All transactions are done using an Ethereum ERC-20 compliant token. People can mine tokens by performing the computationally intensive tasks of data generation and model training instead of mining cryptocurrency.
In summer 2016, we met to build a low-cost brain-computer interface that you could plug into your phone. We want everyone interested in BCI technology to be able to try it out.
Two months later, we premiered the world’s first £20 BCI at EMF camp as ‘smartphone-BCI’.
As of summer 2017, we have:
- a simple, two electrode EEG kit that amplifies neural signals, and modulates them for input to an audio jack;
- a basic Android diagnostic app;
- an SSVEP Unity text entry app.
The v0.1 circuit reads a bipolar EEG signal and sends the signal out along an audio cable, for use with a smartphone, tablet, laptop, etc.
EEG signals are difficult to work with: they are very faint, and easily interfered with by other signals, including muscle movements and mains electricity, both of which are much more powerful. Also, the interesting frequencies range between 4 Hz and 32 Hz (depending on the intended use), but a smartphone sound card will filter out all signals below 20 Hz.
Thus, the v0.1 circuit:
- amplifies the signals that come from the electrodes, boosting them from the microvolt to the millivolt range;
- uses amplitude modulation to impose the signal on a 1 kHz carrier tone, allowing it to bypass the 20 Hz high-pass filter behind the phone’s audio jack.
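A minimal digital sketch of that modulation step. The sample rate, tone frequencies, and modulation depth below are illustrative assumptions, not the v0.1 circuit's actual values (the real circuit does this in analogue hardware):

```python
import math

FS = 8000          # sample rate in Hz (illustrative)
F_EEG = 10.0       # example EEG-band tone, below the ~20 Hz audio high-pass
F_CARRIER = 1000.0 # carrier tone, well inside the audio passband
N = 8000           # one second of samples

def am_modulate(n):
    """Amplitude-modulate a 10 Hz 'EEG' tone onto a 1 kHz carrier."""
    t = n / FS
    eeg = math.sin(2 * math.pi * F_EEG * t)          # baseband signal
    carrier = math.sin(2 * math.pi * F_CARRIER * t)  # audio-band carrier
    return (1.0 + 0.5 * eeg) * carrier               # modulation depth 0.5

def dft_magnitude(samples, freq):
    """Naive single-bin DFT magnitude at `freq` Hz."""
    re = sum(s * math.cos(2 * math.pi * freq * n / FS) for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * n / FS) for n, s in enumerate(samples))
    return math.hypot(re, im) / len(samples)

signal = [am_modulate(n) for n in range(N)]
# After modulation, the energy sits at the carrier and its sidebands
# (1000 Hz and 1000 ± 10 Hz), all above the ~20 Hz high-pass filter,
# while the original 10 Hz bin is empty.
```

Demodulating on the phone side then recovers the sub-20 Hz EEG band from the sidebands.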
As humans, we can perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.” He wants to change that. His research into our brain processes has led him to create new interfaces — such as a sensory vest — to take in previously unseen information about the world around us.
A truly radical idea. Mindblowing.
Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.
DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.
Great effort. I’d love to attend a conference arranged by groups like this one.
Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.
Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.
According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”
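In the 1999 formulation, a representation $T$ of the input $X$ is chosen to be as compressed as possible while staying predictive of the label $Y$; schematically:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```

where $I(\cdot\,;\cdot)$ is mutual information and $\beta$ trades off compression against prediction — the "squeezing through a bottleneck" in the description above.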
Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.
For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example.
The video is here.
From left to right: Elon Musk (Tesla, SpaceX), Stuart Russell (University of California, Berkeley), Bart Selman (Cornell University), Ray Kurzweil (Google, inventor, futurist), David Chalmers (New York University, Australian National University, philosopher), Nick Bostrom (University of Oxford, philosopher), Demis Hassabis (DeepMind), Sam Harris (author, philosopher, neuroscientist, atheist), and Jaan Tallinn (Skype, Kazaa) discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.
Max Tegmark put some of the brightest minds of our times in a room to discuss Artificial General Intelligence and Superintelligence. This is the video of the most significant panel at that event, the Beneficial AI 2017 conference.
It’s a one-hour video, totally worth your time.
Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”
documents filed with California show that Levandowski is Way of the Future’s CEO and President, and that it aims “through understanding and worship of the Godhead, [to] contribute to the betterment of society.”
There are grave allegations hanging over Levandowski’s head, and this profile by Wired is anything but positive. It would be unfortunate for transhumanism, as an intellectual movement, if those claims end up substantiated.
Right now, the press is giving transhumanism a religious emphasis it doesn’t merit. But I expect that, eventually, some speculators will try to turn it into an actual religion.
Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
When I went to Davos in January, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility if we build this stuff to guide it safely into the world. First, be clear on the purpose, work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who are the experts, where did the data come from. And when consumers are using AI, you inform them that they are and inform the company as well that owns the intellectual property. And the third thing is to be committed to skill.
IBM and its term “cognitive computing” are all about so-called “weak AI”. The problem is that explaining the reasoning behind an answer is, at the moment, incredibly challenging compared with just giving the answer in a black-box fashion.
Vizor is a sort of eyewear with clear lenses. But it can also project your patient’s spine in 3D, so that you can locate your tools in real time even when they are below the skin. It also has multiple sensors to detect your head movements.
Hospitals first have to segment the spine from the rest of the scan, separating it from soft tissue and other structures. They already have all the tools they need to do this themselves.
Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.
Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.
Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.
Remarkable, but hard to explain in words. Watch the video.
Amazon’s first wearable device will be a pair of smart glasses with the Alexa voice assistant built in, according to a report in the Financial Times. The device will reportedly look like a regular pair of glasses and use bone-conduction technology so that the user can hear Alexa without the need for earphones or conventional speakers. It won’t, however, likely have a screen or camera, although Google Glass founder Babak Parviz has apparently been working on the project following his hiring by Amazon in 2014.
Google failed at this in the same way Microsoft failed at tablets before Apple introduced the iPad. Execution is everything, and maybe glasses that offer only a voice user interface are a more manageable first step than ones that also feature augmented vision.
On the other hand, so far Amazon hasn’t shone as a hardware vendor. Their Android phone, a primary vector for Alexa, was a failure. The other devices they sell are OK but not memorable, and not aesthetically pleasing (which becomes important in fashion accessories like glasses).
One final thought: Amazon’s long-term goal is to have Alexa everywhere, so either these glasses will get increasingly cheaper (like Kindle devices do) or Amazon will find a way to apply the same technology to every pair of glasses on the market.
The Nuada is a smart glove. It gives back hand strength and coordination by augmenting the motions of your palm and digits. It acts as an electromechanical support system that lets you perform nearly superhuman feats or simply carry out day-to-day tasks. The glove contains electronic tendons that can help the hand open and close and even perform basic motions, and a sensor tells doctors and users about pull strength, dexterity, and other metrics.
“We then use our own electromechanical system to support the user in the movement he wants to do,” said Quinaz. “This makes us able to support incredible weights with a small system, that needs much less energy to function. We can build the first mass adopted exoskeleton solutions with our technology.”
On it are the PowerPoint slides of his next big project, a breathtaking $100 million, five-year proposal focused on paralysis, depression, amputation, epilepsy, and Parkinson’s disease. Herr is still trying to raise the money, and the work will be funneled through his new brainchild, MIT’s Center for Extreme Bionics, a team of faculty and researchers assembled in 2014 that he codirects. After exploring various interventions for each condition, Herr and his colleagues will apply to the FDA to conduct human trials. One to-be-explored intervention in the brain might, with the right molecular knobs turned, augment empathy. “If we increase human empathy by 30 percent, would we still have war?” Herr asks. “We may not.”
The idea of an endlessly upgradable human is something Herr feels in his bones. “I believe in the near future, in a decade or two, when you walk down the streets of Boston, you’ll routinely see people wearing bionic systems,” Herr told ABC News in a 2016 interview. In 100 years, he thinks the human form will be unrecognizable. The inference is that the abnormal will be normal, beauty rethought and reborn. Unusual people like Herr will have come home.
The maker of the world’s first commercial artificial retina, which provides partial sight to people with a certain form of blindness, is launching a clinical trial for a brain implant designed to restore vision to more patients. The company, Second Sight, is testing whether an array of electrodes placed on the surface of the brain can return limited vision to people who have gone partially or completely blind.
All three of the company’s devices, sometimes called bionic eyes, are intended to bring back some vision in patients with a genetic eye disorder called retinitis pigmentosa. The disease causes gradual vision loss as light-sensing cells called photoreceptors break down in the retina, the tissue membrane that coats the back of the eye.
The new device, the Orion, borrows about 90 percent of its technology from the Argus II but bypasses the eye. Instead, an array of electrodes is placed on the surface of the visual cortex, the part of the brain that processes visual information. Delivering electrical pulses here should tell the brain to perceive patterns of light.
A major downside is the device requires a more invasive surgery than the Argus II. A small section of the skull needs to be removed to expose the area of the brain where the array of electrodes is placed. Because electrical brain implants carry risks like infection or seizures, the first clinical trial will be small, and the company will start off by testing the implant in patients who are completely blind.
Capt. Scott Kraft, commanding officer at the Naval Surface Warfare Center Indian Head technology division in Maryland, said artificial intelligence and big data analytics could potentially help technicians more quickly recognize exactly what type of bomb they are dealing with and choose the best option for neutralizing it. The vast amount of data collected during the past 16 years of war could be exploited to make faster decisions in combat situations, he said.
AI could also help EOD forces defeat electronic warfare threats by detecting sources of transmission and interference, officials said.
“The electromagnetic spectrum is now the new high ground on the battlefield,” Young said. U.S. troops “have to have situational awareness of it, what’s happening and why, and if we don’t we’re going to be at a disadvantage.”
Signals interference can impede the operations of robots and other EOD tools.
“If you’ve been to theater lately … you’ve heard about a lot of the counter-UAS systems along with all the jammers, along with all the electronic warfare systems,” Young said.
“It becomes very complex. So we want to try to simplify that” for operators that aren’t EW experts, Young said.
The whole article is about artificial intelligence and drone technologies applied to explosive ordnance disposal. However, reading it, it’s easy to see how AI is considered a strategic weapon and could be used for many applications, not just improvised explosive device (IED) discovery and disposal. And some military organizations have very large data sets to train AI.
The possible applications go all the way to the supersoldier scenarios, as I heard from at least one startup.
No surprise Putin said that whoever leads in AI will rule the world.
Venter and colleagues at his company Human Longevity, Inc. (HLI), based in San Diego, California, sequenced the whole genomes of 1,061 people of varying ages and ethnic backgrounds. Using the genetic data, along with high-quality 3D photographs of the participants’ faces, the researchers used an artificial intelligence approach to find small differences in DNA sequences, called SNPs, associated with facial features such as cheekbone height. The team also searched for SNPs that correlated with factors including a person’s height, weight, age, vocal characteristics and skin colour.
The approach correctly identified an individual out of a group of ten people randomly selected from HLI’s database 74% of the time. The findings, according to the paper, suggest that law-enforcement agencies, scientists and others who handle human genomes should protect the data carefully to prevent people from being identified by their DNA alone.
The scientific community, including a co-author (who works for Apple), suggests that the paper misrepresented the data.
The point is that we are going in that direction and the progress is remarkable. The scientist reviewing the paper for Nature said:
HLI’s actual data are sound, and he is impressed with the group’s novel method of determining age by sequencing the ends of chromosomes, which shorten over time.
This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.
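For intuition, here is a toy sketch of just the ownership-resolution step mentioned in the demo: when an item leaves a shelf, assign it to the nearest tracked shopper. The data structures and the nearest-neighbour rule are my own illustrative assumptions, not Standard Cognition’s actual method:

```python
from dataclasses import dataclass

@dataclass
class Person:
    """A tracked shopper: position on the floor plus a running inventory."""
    track_id: int
    x: float
    y: float
    inventory: list

def resolve_ownership(item_x, item_y, people):
    """Assign a picked-up item to the closest tracked shopper (toy rule)."""
    owner = min(people, key=lambda p: (p.x - item_x) ** 2 + (p.y - item_y) ** 2)
    owner.inventory.append((item_x, item_y))
    return owner.track_id

people = [Person(1, 0.0, 0.0, []), Person(2, 5.0, 5.0, [])]
owner_id = resolve_ownership(4.5, 5.2, people)  # item picked up near person 2
```

The real system presumably fuses action analysis and item classification across many cameras before making this call; the point of the sketch is only how the per-person inventory in the visualization could be maintained.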
A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition during online shopping.
Online shopping is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in their first retail shop.
Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.
Once smart contact lenses become a reality, this will be inevitable.
We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation.
Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).
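The pipeline the abstract describes, deep-network features fed into a logistic regression, can be sketched as follows. Since the paper’s actual face embeddings aren’t available, the sketch draws synthetic stand-in features; every name and number here is illustrative, not the study’s code:

```python
import math
import random

random.seed(0)

def synthetic_embedding(label):
    """Stand-in for a deep-network face embedding (4-D, two noisy clusters)."""
    centre = 0.5 if label == 1 else -0.5
    return [random.gauss(centre, 1.0) for _ in range(4)]

# 400 labelled "faces": embedding vector plus a binary label.
data = [(synthetic_embedding(y), y) for y in [0, 1] * 200]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression trained with batch gradient descent,
# mirroring the paper's "deep features -> logistic regression" setup.
w, b, lr = [0.0] * 4, 0.0, 0.1
for _ in range(200):
    gw, gb = [0.0] * 4, 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y                       # gradient of the log loss
        gw = [g + err * xi for g, xi in zip(gw, x)]
        gb += err
    w = [wi - lr * g / len(data) for wi, g in zip(w, gw)]
    b -= lr * gb / len(data)

accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
```

The averaging over five images per person reported in the paper would correspond to averaging the per-image probabilities before thresholding.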
Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy.
Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.
Let me reiterate: The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.
Imagine if this analysis were incorporated into the hiring process and used to discriminate against candidates.
Although cellular therapies represent a promising strategy for a number of conditions, current approaches face major translational hurdles, including limited cell sources and the need for cumbersome pre-processing steps (for example, isolation, induced pluripotency). In vivo cell reprogramming has the potential to enable more-effective cell-based therapies by using readily available cell sources (for example, fibroblasts) and circumventing the need for ex vivo pre-processing.
Existing reprogramming methodologies, however, are fraught with caveats, including a heavy reliance on viral transfection. Moreover, capsid size constraints and/or the stochastic nature of status quo approaches (viral and non-viral) pose additional limitations, thus highlighting the need for safer and more deterministic in vivo reprogramming methods.
Here, we report a novel yet simple-to-implement non-viral approach to topically reprogram tissues through a nanochannelled device validated with well-established and newly developed reprogramming models of induced neurons and endothelium, respectively. We demonstrate the simplicity and utility of this approach by rescuing necrotizing tissues and whole limbs using two murine models of injury-induced ischaemia.
“With this technology, we can convert skin cells into elements of any organ with just one touch. This process only takes less than a second and is non-invasive, and then you’re off. The chip does not stay with you, and the reprogramming of the cell starts. Our technology keeps the cells in the body under immune surveillance, so immune suppression is not necessary,” said Sen, who also is executive director of Ohio State’s Comprehensive Wound Center.
TNT technology has two major components: First is a nanotechnology-based chip designed to deliver cargo to adult cells in the live body. Second is the design of specific biological cargo for cell conversion. This cargo, when delivered using the chip, converts an adult cell from one type to another, said first author Daniel Gallego-Perez, an assistant professor of biomedical engineering and general surgery who also was a postdoctoral researcher in both Sen’s and Lee’s laboratories.
TNT doesn’t require any laboratory-based procedures and may be implemented at the point of care. The procedure is also non-invasive. The cargo is delivered by zapping the device with a small electrical charge that’s barely felt by the patient.
Hugh Herr is building the next generation of bionic limbs, robotic prosthetics inspired by nature’s own designs. Herr lost both legs in a climbing accident 30 years ago; now, as the head of the MIT Media Lab’s Biomechatronics group, he shows his incredible technology in a talk that’s both technical and deeply personal — with the help of ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombing, and performs again for the first time on the TED stage.
Addressing disabilities is just the beginning. You can tell that Herr wants bionic prosthetics to augment humans beyond their limits.
An incredible TED Talk.
Renowned computer scientist Ben Shneiderman has a plan to ensure algorithmic accountability. The University of Maryland professor and founder of its Human-Computer Interaction Lab outlined his strategy at the 2017 Turing Lecture on Tuesday. “What I’m proposing is a National Algorithm Safety Board,” Shneiderman told the audience in London’s British Library. The board would provide three forms of independent oversight: planning, continuous monitoring, and retrospective analysis. Combined, they provide a basis to ensure the correct system is selected and then supervised, and that lessons can be learnt to make better algorithms in the future.
Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”
Another exceptional TED Talk.
Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo — without actually understanding a thing. While it’s not matriculating anytime soon, Todai Robot’s success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?
The key idea of this beautiful talk:
we humans can understand the meaning. That is something which is very, very lacking in AI. But most of the students just pack the knowledge without understanding the meaning of the knowledge, so that is not knowledge, that is just memorizing, and AI can do the same thing. So we have to think about a new type of education.
The first cancer treatment that involves reprogramming a patient’s own blood cells to fight cancer has been approved by the US Food and Drug Administration, leading the way for federal approval of other, similar efforts.
Kymriah is manufactured by the pharmaceutical company Novartis AG to treat children with acute lymphoblastic leukemia (ALL). It’s shown very encouraging results in clinical trials, but the price tag will be hefty: Analysts say it will cost “a fortune,” or maybe $700,000 for one course of treatment.
Kymriah is one type of so-called CAR-T cancer therapies. First, doctors take the patient’s white blood cells, or T cells, out of the body and add a special receptor called a chimeric antigen receptor (CAR). The receptor gives the T cells the ability to attack cancer cells. Then, these engineered cells are put back into the body. It’s a highly personalized form of medicine, since each dose must be tailored to the patient.
Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis).
More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action.
Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells.
We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.