By monitoring brain activity, the system can detect in real time whether a person notices an error as a robot performs a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.
To create the system, the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, placing a series of electrodes on the users’ scalps and forearms.
Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
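The article doesn’t detail how the two channels are fused, but conceptually it comes down to two decoders feeding one decision rule: EEG flags *that* an error happened, EMG says *what* to do about it. The sketch below is a hypothetical illustration of that logic — the function name, thresholds, and "halt" fallback are all assumptions, not the researchers’ actual pipeline:

```python
# Minimal sketch of fusing two bio-signal channels, assuming each has
# its own (hypothetical) upstream classifier producing a confidence.
# Thresholds and labels are invented for illustration.

def fuse_eeg_emg(eeg_error_prob, emg_gesture, emg_conf, threshold=0.5):
    """Combine an EEG error signal with an EMG gesture command.

    eeg_error_prob: probability [0, 1] that the user noticed a robot error
    emg_gesture:    decoded gesture, e.g. "left" or "right"
    emg_conf:       confidence [0, 1] of the gesture decoder
    """
    # Only intervene when the EEG channel flags a likely error.
    if eeg_error_prob < threshold:
        return None  # let the robot continue its current action
    # EEG alone says "error" but not which correction; defer to EMG
    # only when the gesture decoder is itself confident.
    if emg_conf >= threshold:
        return emg_gesture
    return "halt"  # error detected but no reliable correction: stop


# Usage: EEG flags an error and EMG confidently points left.
print(fuse_eeg_emg(0.9, "left", 0.8))  # -> left
print(fuse_eeg_emg(0.2, "left", 0.8))  # -> None
```

The design choice mirrors the article’s point: neither signal is reliable alone, so each one gates the other.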
MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.
The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
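The article doesn’t name the model, but the core idea — correlating recurring signal patterns with particular words — can be shown with one of the simplest possible classifiers, a nearest-centroid lookup over per-word feature vectors. The features and two-word vocabulary below are invented purely for illustration:

```python
# Hedged sketch: map neuromuscular-signal feature vectors to words by
# averaging training examples per word, then picking the nearest
# centroid. All data here is made up; the real system is far richer.
import math

def train_centroids(examples):
    """examples: dict word -> list of feature vectors (lists of floats)."""
    centroids = {}
    for word, vecs in examples.items():
        n = len(vecs)
        centroids[word] = [sum(v[i] for v in vecs) / n
                           for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, vec):
    # Return the word whose centroid is closest to the input vector.
    return min(centroids, key=lambda w: math.dist(centroids[w], vec))

examples = {
    "call": [[0.9, 0.1], [0.8, 0.2]],
    "time": [[0.1, 0.9], [0.2, 0.8]],
}
model = train_centroids(examples)
print(classify(model, [0.85, 0.15]))  # -> call
```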
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.
But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
Sci-fi movies shaped the collective imagination of neural interfaces as some sort of hardware port or dongle sticking out of the neck, connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far in the future.
This approach is much more feasible. Imagine if this object, AlterEgo, were to become the main computer peripheral, replacing the keyboard and mouse.
The question is not just about accuracy, but also about how its speed compares to existing input methods.
Watch the video.
From MIT IQ Launch
On March 1, we convened at Kresge Auditorium on the MIT campus to set out on the MIT Intelligence Quest — an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society.
MIT faculty, alumni, students, and friends talked about their work across all aspects of this domain — from unpublished research, to existing commercial enterprises, to the social and ethical implications of AI.
Learn why and how MIT is primed to take the next breakthrough step in advancing the science and applications of intelligence by clicking on the available presentations below.
MIT announced the Intelligence Quest in February. This is the full launch event; dozens of presentations were recorded and are now available online.
At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.
Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.
MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.
The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.
The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.
In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.
MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab.
What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.
Artificial General Intelligence might come out of this project.
Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.
Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy.
Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.
The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.
They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
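Those variation figures matter because an analog synapse computes by conducting current: any device-to-device spread directly perturbs every dot product the chip performs. The sketch below illustrates that effect with a simple multiplicative-noise model — the 4 percent figure echoes the article, but the noise model, the much larger spread used for the amorphous case, and all the numbers are assumptions for illustration only:

```python
# Sketch: how per-synapse conductance variation corrupts an analog
# weighted sum. Each synapse conducts its ideal current scaled by a
# random per-device factor drawn around 1.0.
import random

def noisy_dot(weights, inputs, variation, rng):
    return sum(w * x * rng.gauss(1.0, variation)
               for w, x in zip(weights, inputs))

weights = [0.5, -0.2, 0.8]
inputs = [1.0, 1.0, 1.0]
ideal = sum(w * x for w, x in zip(weights, inputs))  # exact result: 1.1

def mean_abs_error(variation, trials=1000, seed=0):
    # Average deviation from the ideal dot product over many devices.
    rng = random.Random(seed)
    return sum(abs(noisy_dot(weights, inputs, variation, rng) - ideal)
               for _ in range(trials)) / trials

# Uniform crystalline synapses (~4% spread) vs a hypothetical
# high-variation amorphous device (40% spread, an assumed figure).
print(mean_abs_error(0.04))
print(mean_abs_error(0.40))
```

Run it and the low-variation array stays an order of magnitude closer to the exact answer — which is why uniformity, not raw switching speed, was the headline result.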
Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI) and, before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.
Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.
The human genome contains six billion DNA letters, or chemical bases known as A, C, G and T. These letters pair off—A with T and C with G—to form DNA’s double helix. Base editing, which uses a modified version of CRISPR, is able to change a single one of these letters at a time without making breaks to DNA’s structure.
That’s useful because sometimes just one base pair in a long strand of DNA gets swapped, deleted, or inserted—a phenomenon called a point mutation. Point mutations make up 32,000 of the 50,000 changes in the human genome known to be associated with diseases.
In the Nature study, researchers led by David Liu, a Harvard chemistry professor and member of the Broad Institute, were able to change an A into a G. Such a change would address about half the 32,000 known point mutations that cause disease.
To do it, they modified CRISPR so that it would target just a single base. The editing tool was able to rearrange the atoms in an A so that it instead resembled a G, tricking cells into fixing the other DNA strand to complete the switch. As a result, an A-T base pair became a G-C one. The technique essentially rewrites errors in the genetic code instead of cutting and replacing whole chunks of DNA.
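As a toy illustration of the bookkeeping described above, here is the A-to-G swap and the complementary-strand fix expressed as string operations. Real base editing is enzymatic chemistry performed inside cells, not text editing; this only shows the before/after state of the base pair:

```python
# Toy model of a point-mutation fix: edit one base in a DNA string,
# then rebuild the complementary strand so the pair stays matched
# (an A-T pair becomes a G-C pair). Sequences are invented.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_edit(strand, position, new_base):
    """Return the edited strand and its rebuilt complement."""
    edited = strand[:position] + new_base + strand[position + 1:]
    complement = "".join(COMPLEMENT[b] for b in edited)
    return edited, complement

strand, comp = base_edit("CATTAG", 1, "G")  # change the A at index 1 to G
print(strand)  # -> CGTTAG
print(comp)    # -> GCAATC
```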
The new method is also known as ABE (adenine base editor).
Before ABE can be tried in human patients, Liu says, doctors would need to determine when to intervene in the course of a genetic disease. They would also have to figure out how best to deliver the gene editor to the relevant cells, and to prove the approach is safe and effective enough to make a difference for the patient.
The ABE gene-editing process is efficient, effectively editing the relevant spot in the genome an average of 53 percent of the time across 17 tested sites, Liu said. It caused undesired effects less than 0.1 percent of the time, he added. That success rate is comparable with what CRISPR can do when it is cutting genes.
It’s such an incredible moment to work (and invest) in life sciences.
RNA has important and diverse roles in biology, but molecular tools to manipulate and measure it are limited. For example, RNA interference can efficiently knockdown RNAs, but it is prone to off-target effects, and visualizing RNAs typically relies on the introduction of exogenous tags. Here we demonstrate that the class 2 type VI RNA-guided RNA-targeting CRISPR–Cas effector Cas13a (previously known as C2c2) can be engineered for mammalian cell RNA knockdown and binding.
After initial screening of 15 orthologues, we identified Cas13a from Leptotrichia wadei (LwaCas13a) as the most effective in an interference assay in Escherichia coli. LwaCas13a can be heterologously expressed in mammalian and plant cells for targeted knockdown of either reporter or endogenous transcripts with comparable levels of knockdown as RNA interference and improved specificity. Catalytically inactive LwaCas13a maintains targeted RNA binding activity, which we leveraged for programmable tracking of transcripts in live cells.
Our results establish CRISPR–Cas13a as a flexible platform for studying RNA in mammalian cells and therapeutic development.
Chronic wounds do not heal in an orderly fashion in part due to the lack of timely release of biological factors essential for healing. Topical administration of various therapeutic factors at different stages is shown to enhance the healing rate of chronic wounds. Developing a wound dressing that can deliver biomolecules with a predetermined spatial and temporal pattern would be beneficial for effective treatment of chronic wounds. Here, an actively controlled wound dressing is fabricated using composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers. The fibers are loaded with different drugs and biological factors and are then assembled using textile processes to create a flexible and wearable wound dressing. These fibers can be individually addressed to enable on-demand release of different drugs with a controlled temporal profile. Here, the effectiveness of the engineered dressing for on-demand release of antibiotics and vascular endothelial growth factor (VEGF) is demonstrated for eliminating bacterial infection and inducing angiogenesis in vitro. The effectiveness of the VEGF release on improving healing rate is also demonstrated in a murine model of diabetic wounds.
Instead of plain sterile cotton or other fibers, this dressing is made of “composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers,” which really says it all.
It acts as a regular bandage, protecting the injury from exposure and so on, but attached to it is a stamp-sized microcontroller. When prompted by an app (or an onboard timer, or conceivably sensors woven into the bandage), the microcontroller sends a voltage through certain of the fibers, warming them and activating the medications lying dormant in the hydrogel.
Those medications could be anything from topical anesthetics to antibiotics to more sophisticated things like growth hormones that accelerate healing. More voltage, more medication — and each fiber can carry a different one.
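Conceptually, the addressing scheme reduces to mapping each fiber to a drug and triggering timed activations. The sketch below is an invented software analogue of that idea — fiber IDs, drug names, and durations are all made up, and the real device is driven by microcontroller firmware, not Python:

```python
# Hedged sketch of individually addressable drug-loaded fibers: each
# fiber has its own core heater, and "activating" a fiber releases the
# drug held in its hydrogel coating. Here we only log the events.

class Dressing:
    def __init__(self, fibers):
        self.fibers = fibers   # fiber id -> drug name
        self.released = []     # activation log: (fiber, drug, seconds)

    def activate(self, fiber_id, seconds):
        # In hardware this would drive a voltage through the fiber's
        # core heater for `seconds`, warming the thermoresponsive
        # carriers; here we just record the release.
        drug = self.fibers[fiber_id]
        self.released.append((fiber_id, drug, seconds))
        return drug

# Usage: the in-vitro sequence from the abstract, as a toy schedule.
dressing = Dressing({0: "antibiotic", 1: "VEGF"})
dressing.activate(0, 30)   # first, clear the bacterial infection
dressing.activate(1, 60)   # then, promote blood-vessel growth
print([d for _, d, _ in dressing.released])  # -> ['antibiotic', 'VEGF']
```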
On it are the PowerPoint slides of his next big project, a breathtaking $100 million, five-year proposal focused on paralysis, depression, amputation, epilepsy, and Parkinson’s disease. Herr is still trying to raise the money, and the work will be funneled through his new brainchild, MIT’s Center for Extreme Bionics, a team of faculty and researchers assembled in 2014 that he codirects. After exploring various interventions for each condition, Herr and his colleagues will apply to the FDA to conduct human trials. One to-be-explored intervention in the brain might, with the right molecular knobs turned, augment empathy. “If we increase human empathy by 30 percent, would we still have war?” Herr asks. “We may not.”
The idea of an endlessly upgradable human is something Herr feels in his bones. “I believe in the near future, in a decade or two, when you walk down the streets of Boston, you’ll routinely see people wearing bionic systems,” Herr told ABC News in a 2016 interview. In 100 years, he thinks the human form will be unrecognizable. The inference is that the abnormal will be normal, beauty rethought and reborn. Unusual people like Herr will have come home.
Hugh Herr is building the next generation of bionic limbs, robotic prosthetics inspired by nature’s own designs. Herr lost both legs in a climbing accident 30 years ago; now, as the head of the MIT Media Lab’s Biomechatronics group, he shows his incredible technology in a talk that’s both technical and deeply personal — with the help of ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombing, and performs again for the first time on the TED stage.
Addressing disabilities is just the beginning. You can tell that Herr wants bionic prosthetics to augment humans beyond their limits.
An incredible TED Talk.
A group of researchers at MIT has developed a remote sleep sensing system that uses radio waves to capture data about your brain waves while you sleep, and AI to read them, without ever touching your body. It consists of a laptop-sized wireless device that emits radio signals. When placed in the user’s bedroom, it detects even the slightest movement of the body. The system doesn’t just do the job of a sleep-tracking wearable without the wearable; it also provides data at a level of accuracy similar to that of a sleep lab.
In order to cut out all the extraneous information the system records, the researchers developed a machine learning algorithm that can extract sleep stages (light, deep, and REM sleep) from the mess of data. The algorithm was trained on a sleep dataset of 25 individuals, covering a total of 100 nights of sleep, collected using an FDA-approved device that uses EEG to record brain waves.
The results are 80 percent accurate.
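For context on what that figure means: sleep staging is conventionally scored per 30-second epoch against EEG-derived ground truth, so accuracy is simply the fraction of epochs where the predicted stage matches. A minimal illustration with invented labels (the scoring convention is standard; the data below is not from the study):

```python
# Sketch of epoch-level sleep-staging accuracy: compare a predicted
# stage sequence against ground-truth labels, one entry per 30-second
# epoch. Both sequences here are made up for illustration.

def epoch_accuracy(predicted, truth):
    assert len(predicted) == len(truth)
    hits = sum(p == t for p, t in zip(predicted, truth))
    return hits / len(truth)

truth     = ["awake", "light", "light", "deep", "rem"]
predicted = ["awake", "light", "deep",  "deep", "rem"]
print(epoch_accuracy(predicted, truth))  # -> 0.8
```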