Author: Alessandro

“About 100 people right now, I think over time they might be 10,000 or more people at Neuralink”

From Neuralink Progress Update, Summer 2020:

0:00 – Stream Start

0:30 – Intro Video

1:02 – Start of Presentation
1:07 – Welcome from Elon Musk
1:16 – Emphasis that this presentation is to encourage recruiting
1:44 – The purpose of Neuralink
3:40 – Current Medical Research and Available Tech
5:42 – Neuralink and Development
6:11 – The Link itself
8:26 – Charging The Link
8:47 – Getting a Link
10:05 – Surgical/Implantation Robot
10:46 – (Mildly Gruesome) Video of Electrode Insertion

11:42 – Tech Demo in Pigs
12:12 – Joyce (Pig with no Neural Implant)
12:58 – Dorothy (Pig that formerly had a Neural Implant, which was later removed)
13:32 – Trouble getting Gertrude into the outside pen
15:34 – Gertrude (Pig with Neural Implant, beeps when Snout stimulated)
17:15 – Pigs with two Neural Implants

17:58 – Reading Brain Activity
18:20 – Reading Neurons to predict joint positions of a Pig on a Treadmill

19:10 – Neuron Stimulation with Implants (“Writing to the Brain”)
19:27 – Analysing Neuron Stimulation with Two-Photon Microscopy

20:41 – Specs of Initial Device
21:30 – Neuralink progress towards Clinical Studies
22:20 – Further emphasis that this presentation is to encourage recruiting

23:39 – Start of Live Q&A
24:53 – How is Spike Detection implemented and What is a Spike?
27:00 – What can be further done to simplify the device installation process?
27:41 – Anything specifically to do with the Robot?
28:25 – What are some of the lower Bandwidth activities to target first?
29:15 – Can you summon a Tesla telepathically?
29:45 – How do you see the device and API developing over time?
30:38 – Will the device ever be used for gaming?
31:03 – Is the device limited to surface layers of the Brain?
34:00 – What is the most challenging problem that must be solved to meet Neuralink’s goal?
36:44 – How thin are the Electrodes, and is there a possibility of Upgrades?
38:09 – What are the “Read/Write speeds” of the device?
40:34 – How big is the Neuralink Team and how do you expect it to grow?
41:32 – How does the system fare against outside signal disturbances?
42:44 – Audience Question: What are some other applications for the device?
44:55 – How is the device protected from the body?
46:42 – Will you be able to save and replay memories in the future?
47:32 – Animal care in Neuralink
49:19 – What programming language is being used in developing the device and Robot?
50:10 – But can it run Crysis?
50:59 – Can the device be used to eventually explain Consciousness?
52:33 – The Security of the device
54:11 – Any points the team find cool about the device?
54:51 – Positives of using Pigs as a model for development
56:50 – Availability and Cost of the device and implantation
58:02 – Architecture of the device
59:54 – How does the integrity of the device compare to Bone and the area of implantation?

1:01:55 – Closing questions and comments
1:02:00 – Flexibility and width of the Threads
1:02:27 – Length of the Threads
1:02:40 – Movements of the Threads and other improvements to biocompatibility
1:02:58 – (Down the line of staff) What is number one on your wishlist for the device?

1:12:10 – Closing statement from Elon Musk

Too many important things to quote. Just watch the whole presentation. Especially the Q&A.

Instead of replacing one damaged cornea with one healthy one, you could grow enough cells from one donated cornea to print 50 artificial ones

From Scientists have 3D printed the most advanced artificial cornea ever using human cells – The Verge

It was tricky to find the right recipe for an ink that’s thin enough to squirt through a 3D printer’s nozzle, says Che Connon, a tissue engineer at Newcastle University who was one of the creators of the artificial cornea. This bio-ink didn’t just have to be thin — it also had to be stiff enough that it could hold its shape as a 3D structure. To get the right consistency, the researchers added a jelly-like goo called alginate and stem cells extracted from donor corneas, along with some ropy proteins called collagen.

But there’s still a long way to go before these artificial corneas will even get close to a human eyeball: Connon wants to fine tune the printing process first, he says, and the artificial cornea will also need to go through safety studies in animals. But this study is proof that you can 3D print something that looks like a cornea, and contains mostly the same ingredients. It’s also the first time researchers have recreated the cornea’s distinctive, curved shape.

If and when this technique is perfected, tech-augmented corneas in place of smart contact lenses are not an unthinkable scenario.

US Department of Defense has 592 projects powered by Artificial Intelligence

From Pentagon developing artificial intelligence center

Speaking at the House Armed Services Committee April 12, Mattis said “we’re looking at a joint office where we would concentrate all of DoD’s efforts, since we have a number of AI efforts underway right now. We’re looking at pulling them all together.”

He added that the department counts 592 projects as having some form of AI in them, but noted that not all of those make sense to tie into an AI center. And Griffin wants to make sure smaller projects that are close to completion get done and out into prototyping, rather than tied up in the broader AI project.

And then, of course, there are those AI projects so secret that they won’t even be listed among those 592. It would be interesting to see how many of these relate to the super-soldier use case.

A brain-scanner could be an instrument of explicit coercion

From TED 2018: Thought-Reading Machines and the Death of Love | WIRED

The San Francisco startup is developing an optical imaging system—sufficiently compact to fit inside a skull cap, wand, or bandage—that scatters and captures near-infrared light inside our bodies to create holograms that reveal our occluded selves. The devices could diagnose cancers as well as cardiovascular or other diseases. But because the wavelength of near-infrared light is smaller than a micron, smaller than the smallest neuron, Jepsen believes the resolution of the technology is fine enough to make thoughts visible.

and

the company’s promise depended on combining these elements: proof of the entire body’s translucence; holographic techniques, some dating to the 1960s; and Asian silicon manufacturing, which can make new chip architectures into commercial products. Openwater may be less than two years old, but Jepsen has been thinking about a holographic scanner for decades. She is uniquely suited to the challenge. Her early research was in holography; she led display development at Intel, Google X, and Facebook Oculus; and she has shipped billions of dollars of chips.

and

The idea derives from Jack Gallant, a cognitive neuroscientist at UC Berkeley, who decoded movies shown to subjects in a functional MRI machine by scanning the oxygenated blood in their brains. The images Gallant recovered are blurry, because the resolution of fMRI is comparatively coarse. Holography would not only see blood better but capture the electrochemical pulses of the neurons themselves.

Wearable device picks up neuromuscular signals saying words “in your head”

From Computer system transcribes words users “speak silently” | MIT News

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

and

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
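To make the quoted pipeline concrete: electrodes produce multi-channel signals, features are extracted, and a trained classifier maps them to words. Below is a minimal Python sketch of that kind of signal-to-word pipeline on synthetic data; the seven channels, RMS features, tiny vocabulary, and logistic-regression classifier are my own illustrative assumptions, not details from the MIT paper.

```python
# Minimal sketch of a silent-speech classifier in the spirit of AlterEgo.
# All specifics (7 electrode channels, RMS features, the vocabulary, a
# logistic-regression classifier) are assumptions for illustration only,
# not details taken from the MIT paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
WORDS = ["one", "two", "three", "plus", "minus", "equals"]   # hypothetical vocabulary
N_CHANNELS, N_SAMPLES = 7, 250              # electrodes x time steps per utterance

def synthetic_utterance(word_id):
    """Fake neuromuscular recording: each word biases the channels differently."""
    base = rng.normal(0.0, 1.0, (N_CHANNELS, N_SAMPLES))
    bias = np.linspace(0.2, 1.0, N_CHANNELS) * (word_id + 1) * 0.3
    return base + bias[:, None] * np.sin(np.linspace(0, 8 * np.pi, N_SAMPLES))

def features(signal):
    """Root-mean-square energy per channel, a deliberately simple feature set."""
    return np.sqrt((signal ** 2).mean(axis=1))

X = np.array([features(synthetic_utterance(w)) for w in range(len(WORDS)) for _ in range(80)])
y = np.array([w for w in range(len(WORDS)) for _ in range(80)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The real system would obviously need time-resolved features and far more data, but the structure is the same: signals in, word labels out.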

Sci-fi movies shaped the collective imaginary about neural interfaces as some sort of hardware port or dongle sticking out of the neck and connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far off in the future.

This approach is much more feasible. Imagine if this device, AlterEgo, became the main computer peripheral, replacing the keyboard and mouse.
The question is not just about accuracy, but also how its speed compares to existing input methods.

Watch the video.

From 3D Printing to Bioprinting and Precision Medicine

From How 3D printing is revolutionizing healthcare as we know it | TechCrunch

3D printing is performed by telling a computer to apply layer upon layer of a specific material (quite often plastic or metal powders), molding them one layer at a time until the final product — be it a toy, a pair of sunglasses or a scoliosis brace — is built. Medical technology is now harnessing this technology and building tiny organs, or “organoids,” using the same techniques, but with stem cells as the production material. These organoids, once built, will in the future be able to grow inside the body of a sick patient and take over when an organic organ, such as a kidney or liver, fails.

researchers in Spain have now taken the mechanics of 3D printing — that same careful layer-upon-layer approach in which we can make just about anything — and revealed a 3D bioprinter prototype that can produce human skin. The researchers, working with a biological ink that contains both human plasma as well as material extracts taken from skin biopsies, were able to print about 100 square centimeters of human skin in the span of about half an hour.

and

A 3D-printed pill, unlike a traditionally manufactured capsule, can house multiple drugs at once, each with different release times. This so-called “polypill” concept has already been tested for patients with diabetes and is showing great promise.

An exoskeleton for athletes and older skiers

From This Affordable Exoskeleton Can Make You A Better Skier

Roam’s founder and CEO is Tim Swift, a longtime veteran of Ekso Bionics, one of the world’s leaders in exoskeletons. Swift loved what Ekso was building, but balked at the hefty price tag that came with systems designed to help the disabled walk. Building devices that aren’t accessible to the masses didn’t make sense to him anymore. So he struck out on his own, aiming to democratize exoskeletons.

and

Roam is using plastics and fabrics, and air for transmission. The company’s core insight, Swift says, is a unique fabric actuator that’s very lightweight, yet strong for its volume and weight. The system relies on valves and a backpack power pack to provide torque to the legs. It also has a machine learning element that’s meant to understand how you ski, and anticipate when you’re going to make a turn in order to deliver the extra torque just when you want it.

When ready for market, the skiing exoskeleton is expected to weigh under 10 pounds, including about four or five pounds of equipment that goes in the backpack.

From This skiing exoskeleton is designed to take the strain off your legs – The Verge

The company claims the exoskeleton will make older skiers feel years younger and able to stay out on the slope for longer. And for athletes, the device will supposedly help them train for days in a row with less fatigue.

So far the company has only built prototypes, but it’s in the process of finalizing a commercial product, set for release in January 2019. Interested skiers can pay $99 to reserve a unit, although the final price is expected to be somewhere between $2,000 and $2,500.

Exoskeletons have a few clear use cases: people with disabilities, workers doing heavy lifting, and supersoldiers. Athletes and healthy people who want to enjoy sports in their later years are interesting new possibilities.

MIT terminates collaboration with Nectome

From MIT severs ties to company promoting fatal brain uploading – MIT Technology Review

According to an April 2 statement, MIT will terminate Nectome’s research contract with Media Lab professor and neuroscientist Edward Boyden.

MIT’s connection to the company drew sharp criticism from some neuroscientists, who say brain uploading isn’t possible.

“Fundamentally, the company is based on a proposition that is just false. It is something that just can’t happen,” says Sten Linnarsson of the Karolinska Institute in Sweden.

He adds that by collaborating with Nectome, MIT had lent credibility to the startup and increased the chance that “some people actually kill themselves to donate their brains.”

It didn’t take long.

It’s hard enough to stand the pressure of the press and public opinion for normal companies. It must be impossibly hard to do so when you try to commercialize an attempt to escape death.

Many of the companies that are covered here on H+ face the same challenge.

AR glasses further augmented by human assistants 

From Aira’s new smart glasses give blind users a guide through the visual world | TechCrunch

Aira has built a service that basically puts a human assistant into a blind user’s ear by beaming live-streaming footage from the glasses camera to the company’s agents who can then give audio instructions to the end users. The guides can present them with directions or describe scenes for them. It’s really the combination of the high-tech hardware and highly attentive assistants.

The hardware the company has run this service on in the past has been a bit of a hodgepodge of third-party solutions. This month, the company began testing its own smart glasses solution called the Horizon Smart Glasses, which are designed from the ground-up to be the ideal solution for vision-impaired users.

The company charges based on usage; $89 per month will get users the device and up to 100 minutes of usage. There are various pricing tiers for power users who need a bit more time.

The glasses integrate a 120-degree wide-angle camera so guides can gain a fuller picture of a user’s surroundings and won’t have to instruct them to point their head in a different direction quite as much. It’s powered by what the startup calls the Aira Horizon Controller, which is actually just a repurposed Samsung smartphone that powers the device in terms of compute, battery and network connection. The controller is appropriately controlled entirely through the physical buttons and also can connect to a user’s smartphone if they want to route controls through the Aira mobile app.

Interesting hybrid implementation and business model, but I have serious doubts that a solution depending on human assistants can scale at a planetary level, or retain the necessary quality of service at that scale.

Eye Tracking For AR Devices?

From Eye Tracking Is Coming to Virtual Reality Sooner Than You Think. What Now? | WIRED

That button had activated the eye-tracking technology of Tobii, the Swedish company where Karlén is a director of product management for VR. Two cameras inside the headset had begun watching my eyes, illuminating them with near-IR light, and making sure that my avatar’s eyes did exactly what mine did.

Tobii isn’t the only eye-tracking company around, but with 900 employees, it may be the largest. And while the Swedish company has been around since 2006, Qualcomm’s prototype headset—and the latest version of its Snapdragon mobile-VR platform, which it unveiled at the Game Developers Conference in San Francisco this week—marks the first time that eye-tracking is being included in a mass-produced consumer VR device.

and

Eye-tracking unlocks “foveated rendering,” a technique in which graphical fidelity is only prioritized for the tiny portion of the display your pupils are focused on. For Tobii’s version, that’s anywhere from one-tenth to one-sixteenth of the display; everything outside that area can be dialed down as much as 40 or 50 percent without you noticing, which means less load on the graphics processor. VR creators can leverage that luxury in order to coax current-gen performance out of a last-gen GPU, or achieve a higher frame rate than they might otherwise be able to.

That’s just the ones and zeros stuff. There are compelling interface benefits as well. Generally, input in VR is a three-step process: look at something, point at it to select it, then click to input the selection. When your eyes become the selection tool, those first two steps become one. It’s almost like a smartphone, where pointing collapses the selection and click into a single step. And because you’re using your eyes and not your head, that means less head motion, less fatigue, less chance for discomfort.

and

There’s also that whole cameras-watching-your-eyes thing. Watching not just what your eyes are doing, but where they look and for how long—in other words, tracking your attention. That’s the kind of information advertisers and marketers would do just about anything to get their hands on. One study has even shown that gaze-tracking can be (mis)used to influence people’s biases and decision-making.

“We take a very hard, open stance,” he says. “Pictures of your eyes never go to developers—only gaze direction. We do not allow applications to store or transfer eye-tracking data or aggregate over multiple users. It’s not storable, and it doesn’t leave the device.”

Tobii does allow for analytic collection, Werner allows; the company has a business unit focused on working with research facilities and universities. He points to eye-tracking’s potential as a diagnostic tool for autism spectrum disorders, to its applications for phobia research. But anyone using that analytical license, he says, must inform users and make eye-tracking data collection an opt-in process.

There is no reason why eye tracking couldn’t do the same things (and pose the same risks) in AR devices.
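To put rough numbers on the foveated-rendering idea quoted above, here is a minimal Python sketch of a two-zone scheme: full shading inside the gaze region, a reduced shading rate outside. The 10 percent foveal area and the 50 percent peripheral reduction are illustrative values picked from the ranges mentioned in the article, not Tobii's actual parameters.

```python
# Minimal sketch of a two-zone foveated-rendering budget. The 10% foveal area
# and the 50% peripheral shading rate are illustrative values picked from the
# ranges quoted above, not Tobii's actual parameters.
import numpy as np

def shading_weights(width, height, gaze_xy, foveal_fraction=0.10, peripheral_rate=0.5):
    """Per-pixel shading cost: 1.0 inside the foveal circle, reduced outside."""
    foveal_radius = np.sqrt(foveal_fraction * width * height / np.pi)
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    return np.where(dist <= foveal_radius, 1.0, peripheral_rate)

w, h = 1280, 1440                         # one eye of a hypothetical headset panel
weights = shading_weights(w, h, gaze_xy=(640, 720))
print(f"estimated shading work saved: {1.0 - weights.mean():.0%}")   # ~45% here
```

With these numbers roughly 45 percent of the shading work disappears, which is where the headroom for higher frame rates on the same GPU comes from.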

MIT Intelligence Quest Launch Event Videos

From MIT IQ Launch

On March 1, we convened at Kresge Auditorium on the MIT campus to set out on the MIT Intelligence Quest — an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society.

MIT faculty, alumni, students, and friends talked about their work across all aspects of this domain — from unpublished research, to existing commercial enterprises, to the social and ethical implications of AI.

Learn why and how MIT is primed to take the next breakthrough step in advancing the science and applications of intelligence by clicking on the available presentations below.

MIT announced the Intelligence Quest in February. This is the whole launch event; dozens of presentations were recorded and are now available online.

Must-watch.

Nectome will preserve your brain, but you have to be euthanized first

From A startup is pitching a mind-uploading service that is “100 percent fatal” – MIT Technology Review

Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal.”

and

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it.

Why Augmented-Reality Glasses Are Ugly

From Why Do Augmented-Reality Glasses Look So Bad? | WIRED

“The battle is between immersive functionality and non-dorky, even cool-looking design. The holy grail is something that not only resembles a normal pair of, say, Gucci glasses, but has functionality that augments your life in a meaningful way.”

Right now, that demands a trade-off. The best AR displays require bulky optical hardware to optimize resolution and provide a wide field of view. That makes it possible to do all kinds of cool things in augmented reality. But early versions, like the Meta 2 AR headset, look more like an Oculus Rift than a pair of Warby Parkers. Slimmer AR displays, like the one used in Google Glass, feel more natural to wear, but they sit above or next to the normal field of vision, so they’re less immersive and less functional. Adding other features to the glasses—a microphone, a decent camera, various sensors—also increases bulk and makes it harder to create something comfortable or stylish.

This tension has split the field of AR glasses into two extremes. On one end, you get hulking glasses packed with features to show off the unbridled potential of augmented reality. On the other end, you sacrifice features to make a wearable that looks and feels more like normal eyewear.

What It’s Like Having to Charge Your Arm

From Never Mind Charging Your Phone: Cyborg Angel Giuffria Explains What It’s Like Having to Charge Your Arm – Core77

At SXSW Angel Giuffria, one of America’s better-known cyborgs, encountered a lot of people that wanted her to demo her robotic arm. As a de facto spokeswoman for the prosthetic community, she gamely agreed, with the result being that her batteries wore down faster than normal.

Be sure to read the whole Q&A session that spontaneously developed over Twitter.

Smart glasses designed to help dyslexic people to read words

From These smart glasses convert words into voice for people who are visually impaired – The Verge

The Oton Glass are glasses with two tiny cameras and an earphone on the sides. Half of the lens is a mirror that reflects the user’s eye so that the inner-facing camera can track eye movements and blinks.

Users will look at some text and blink to capture a photo of what’s in front of them, which gets transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and then converted into a voice that plays through the earpiece. If the system is unable to read those words, a remote worker would be available to troubleshoot.

The Oton was most recently a third-place runner-up for the James Dyson award in 2016:

There exist similar products in the world, but they are not currently commercialized yet. They require a breakthrough of technology and trial-and-error on how to deploy smart glasses. The originality of OTON GLASS consists of two aspects, technology and deployment. First, in the technology realm, startups such as Orcam Inc. and Hours Technology Inc. are currently developing smart glasses for blind people. They mainly develop powerful OCR for the English (Alphabet) using machine learning techniques. On the other hand, OTON GLASS focuses on Japanese character recognition as its unique aspect. OTON GLASS aims to solve the user’s problems by becoming a hybrid (human-to-computer) recognizer and not approaching the problem using OCR Technology. Secondly, in terms of deployment, OTON GLASS is all in one that combines camera-to-glasses – meaning they look like normal glasses. This capture trigger based on human’s behavior is natural interaction for people.

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at eye blood vessels with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?

Self-healing and recyclable electronic skin 

From This electronic skin can heal itself — and then make more skin – The Verge

researchers have created an electronic skin that can be completely recycled. The e-skin can also heal itself if it’s torn apart.

The device, described today in the journal Science Advances, is basically a thin film equipped with sensors that can measure pressure, temperature, humidity, and air flow. The film is made of three commercially available compounds mixed together in a matrix and laced with silver nanoparticles: when the e-skin is cut in two, adding the three compounds to the “wound” allows the e-skin to heal itself by recreating chemical bonds between the two sides. That way, the matrix is restored and the e-skin is as good as new. If the e-skin is broken beyond repair, it can just be soaked in a solution that “liquefies” it so that the materials can be reused to make new e-skin. One day, this electronic skin could be used in prosthetics, robots, or smart textiles.

Nanorobots have potential as intelligent drug delivery systems

From New DNA nanorobots successfully target and kill off cancerous tumors | TechCrunch

Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth

and

DNA nanorobots are a somewhat new concept for drug delivery. They work by getting programmed DNA to fold into itself like origami and then deploying it like a tiny machine, ready for action.

The [Chinese] scientists behind this study tested the delivery bots by injecting them into mice with human breast cancer tumors. Within 48 hours, the bots had successfully grabbed onto vascular cells at the tumor sites, causing blood clots in the tumor’s vessels and cutting off their blood supply, leading to their death.

Remarkably, the bots did not cause clotting in other parts of the body, just the cancerous cells they’d been programmed to target

CRISPR pioneers now use it to detect infections like HPV, dengue, and Zika

From New CRISPR tools can detect infections like HPV, dengue, and Zika – The Verge

The new tools, developed by the labs of CRISPR pioneers Jennifer Doudna and Feng Zhang, are showcased in two studies published today in the journal Science. In one paper, Doudna’s team describes a system called DETECTR, which can accurately identify different types of the HPV virus in human samples. In the second paper, Zhang’s team shows an upgraded version of SHERLOCK — which was shown last year to detect viruses like Zika and dengue, as well as other harmful bacteria — in human samples.

and

The CRISPR used in the first Science study is called CRISPR-Cas12a. Doudna’s team discovered that when this type of CRISPR snips double-stranded DNA, it does something interesting: it starts shredding single-stranded DNA as well

the CRISPR system is programmed to detect the HPV DNA inside a person’s cells. When CRISPR detects it, it also cuts a “reporter molecule” with single-stranded DNA that releases a fluorescent signal. So if the cells are infected with HPV, scientists are able to see the signal and quickly diagnose a patient. For now, DETECTR was tested in a tube containing DNA from infected human cells, showing it could detect HPV16 with 100 percent accuracy, and HPV18 with 92 percent accuracy.

and

Called SHERLOCK, this system uses a variety of CRISPR enzymes, including Cas12a. Last year, Zhang’s team showed that SHERLOCK uses CRISPR-Cas13a to find the genetic sequence of Zika, dengue, and several other bacteria, as well as the sequences associated with a cancer mutation in a variety of human samples, such as saliva. Now, the team has improved the tool to be 100 times more sensitive and detect multiple viruses — such as Zika and dengue — in one sample simultaneously. It does this by combining different types of CRISPR enzymes, which are unleashed together to target distinct bits of DNA and RNA, another of the major biological molecules found in all forms of life. Some enzymes also work together to make the tool more sensitive.

If you read Doudna’s book, featured in the H+ “Key Books” section, you realise the enormous progress we made in the last 10 years in terms of DNA manipulation thanks to CRISPR, and yet you have a clear understanding that we are just scratching the surface of what is possible.

Sequence your genome for less than $1,000 and sell it via blockchain

From Human sequencing pioneer George Church wants to give you the power to sell your DNA on the blockchain | TechCrunch

MIT professor and godfather of the Human Genome Project George Church wants to put your genes on the blockchain.

His new startup Nebula Genomics plans to sequence your genome for less than $1,000 (the current going rate of whole genome sequencing) and then add your data to the blockchain through the purchase of a “Nebula Token.”

Church and his colleagues laid out in a recently released white paper that this will put the genomic power in the hands of the consumer, as opposed to companies like 23andMe and AncestryDNA, which own your genomic data after you take that spit tube test.

These companies sell that data in large swaths to pharmaceutical and research companies, often for millions of dollars. However, using the blockchain, consumers can choose to sell their own data directly.

and

Those buying up tokens and sequencing their DNA through Nebula don’t have to sell it for money, of course, and Nebula says they can still discover insights about their own genetics through the company app without sharing it elsewhere, if they desire.

However, all bought and sold data will be recorded on the blockchain, which is a technology allowing for the recording of all transactions using a key code known only to the person who holds the information.

Two thoughts:

  • If this idea generates even a tiny bit of money for each individual involved, it might unlock unprecedented access to genetic information for advanced engineering.
  • Our genome is the second-to-last thing we have left to sell. The last one is our attention. But once they have our genome, our attention may come for free.

A biohacker injected himself with a DIY herpes treatment in front of a conference audience

From A biohacker injected himself with a DIY herpes treatment in front of a live audience – The Verge

Aaron Traywick, 28, who leads biotech firm Ascendance Biomedical, used an experimental herpes treatment that did not go through the typical route of clinical trials to test its safety.

Instead of being developed by research scientists in laboratories, it was created by a biohacker named Andreas Stuermer, who “holds a masters degree and is a bioentrepreneur and science lover,” according to a conference bio. This is typical of the Ascendance approach. The company believes that FDA regulations for developing treatments are too slow and that having biohackers do the research and experiment on themselves can speed up the process to everyone’s benefit. In the past, the company’s plans have included trying to reverse menopause, a method that is now actually in clinical trials.

From Biohackers Disregard FDA Warning on DIY Gene Therapy – MIT Technology Review

Experts say any gene therapy prepared by amateurs would probably not be potent enough to have much effect, but it could create risks such as an immune reaction to the foreign DNA. “I think warning people about this is the right thing,” says David Gortler, a drug safety expert with the consulting group Former FDA. “The bottom line is, this hasn’t been tested.”

The problem facing regulators is that interest in biohacking is spreading, and it’s increasingly easy for anyone to obtain DNA over the internet.

The last sentence is key. As in the tech industry, once you trigger bottom-up adoption the process is irreversible. And disruptive.

Police in China have begun using sunglasses equipped with facial recognition technology

From Chinese police spot suspects with surveillance sunglasses – BBC News

The glasses are connected to an internal database of suspects, meaning officers can quickly scan crowds while looking for fugitives.

The sunglasses have already helped police capture seven suspects, according to Chinese state media.

The seven people who were apprehended are accused of crimes ranging from hit-and-runs to human trafficking.

and

The technology allows police officers to take a photograph of a suspicious individual and then compare it to pictures stored in an internal database. If there is a match, information such as the person’s name and address will then be sent to the officer.

An estimated 170 million CCTV cameras are already in place and some 400 million new ones are expected to be installed in the next three years.

Many of the cameras use artificial intelligence, including facial recognition technology.

In December 2017, I published Our Machines Can Very Easily Recognise You Among At Least 2 Billion People in a Matter of Seconds. It didn’t take long to go from press claims to real-world implementation.

Human augmentation 2.0 is already here, just not evenly distributed.
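The matching step the BBC quote describes (compare a captured photo against an internal database and return identity details on a hit) boils down to a nearest-neighbour search over face embeddings. Below is a minimal sketch; the random stand-in embeddings and the 0.6 similarity threshold are hypothetical, and a real system would compute its embeddings with a face-recognition network, so none of this reflects the actual Chinese deployment.

```python
# Minimal sketch of the database-matching step: nearest-neighbour search over
# face embeddings with a similarity threshold. The random stand-in embeddings
# and the 0.6 threshold are hypothetical; a real system would compute
# embeddings with a face-recognition network.
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend database: name -> 128-dimensional face embedding.
database = {name: rng.normal(size=128) for name in ["suspect_A", "suspect_B", "suspect_C"]}

def match(query_embedding, threshold=0.6):
    """Return (name, score) of the best match above threshold, else (None, score)."""
    best_name, best_score = None, -1.0
    for name, emb in database.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# An embedding close to suspect_B should match; unrelated noise should not.
print(match(database["suspect_B"] + rng.normal(scale=0.1, size=128)))
print(match(rng.normal(size=128)))
```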

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
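To see what computing “in an analog fashion” buys, and why the 4 percent device-to-device variation quoted above matters, here is a small numpy sketch of a crossbar of conductances computing a layer's weighted sums in one step. The array size and weights are illustrative; only the 4 percent variation figure comes from the article.

```python
# Minimal sketch of an analog synapse array: a layer's weighted sums are
# computed physically as currents through a grid of conductances, and
# device-to-device variation limits the precision. Sizes and weights are
# illustrative; only the ~4% variation figure comes from the MIT result.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs = 64, 10
weights = rng.normal(scale=0.3, size=(n_outputs, n_inputs))    # target synaptic weights

# Each synapse realises its weight as a conductance with ~4% random variation.
conductances = weights * rng.normal(loc=1.0, scale=0.04, size=weights.shape)

x = rng.random(n_inputs)                # input "voltages" for one presentation
ideal = weights @ x                     # what an exact digital layer would compute
analog = conductances @ x               # what the imperfect crossbar delivers

print(f"typical output magnitude:     {np.abs(ideal).mean():.3f}")
print(f"mean error from 4% variation: {np.abs(analog - ideal).mean():.3f}")
```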

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI) and, before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
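One well-known recipe for the kind of confidence measure Tran describes is Monte Carlo dropout: run the same input through the network several times with dropout left on and look at the spread of the outputs. A minimal numpy sketch with a toy, untrained two-layer network follows; it illustrates the idea only and is not the probabilistic-programming approach Uber and Google are building.

```python
# Minimal numpy sketch of uncertainty via Monte Carlo dropout: run the same
# input through a network many times with dropout left on and measure the
# spread of the outputs. The toy, untrained two-layer regression network is
# illustrative only; it is not the approach Uber and Google are building.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)     # toy weights (untrained)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def forward(x, drop_prob=0.5):
    h = np.maximum(0.0, W1 @ x + b1)                # ReLU hidden layer
    mask = rng.random(h.shape) > drop_prob          # dropout stays on at inference time
    h = h * mask / (1.0 - drop_prob)
    return (W2 @ h + b2)[0]

def predict_with_uncertainty(x, n_passes=200):
    samples = np.array([forward(x) for _ in range(n_passes)])
    return samples.mean(), samples.std()            # predictive mean and spread

mean, std = predict_with_uncertainty(np.array([0.7]))
print(f"prediction: {mean:.3f} +/- {std:.3f}")
# A downstream system (say, a driving planner) could refuse to act when std is large.
```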

Improving brain-computer interfaces by decrypting neural patterns

From Cracking the Brain’s Enigma Code – Scientific American

Many human movements, such as walking or reaching, follow predictable patterns, too. Limb position, speed and several other movement features tend to play out in an orderly way. With this regularity in mind, Eva Dyer, a neuroscientist at the Georgia Institute of Technology, decided to try a cryptography-inspired strategy for neural decoding.

Existing brain-computer interfaces typically use so-called ‘supervised decoders.’ These algorithms rely on detailed moment-by-moment movement information such as limb position and speed, which is collected simultaneously with recorded neural activity. Gathering these data can be a time-consuming, laborious process. This information is then used to train the decoder to translate neural patterns into their corresponding movements. (In cryptography terms, this would be like comparing a number of already decrypted messages to their encrypted versions to reverse-engineer the key.)

By contrast, Dyer’s team sought to predict movements using only the encrypted messages (the neural activity), and a general understanding of the patterns that pop up in certain movements.

and

Her team trained three macaque monkeys to either reach their arm or bend their wrist to guide a cursor to a number of targets arranged about a central point. At the same time, the researchers used implanted electrode arrays to record the activity of about 100 neurons in each monkey’s motor cortex, a key brain region that controls movement.

To find their decoding algorithm, the researchers performed an analysis on the neural activity to extract and pare down its core mathematical structure. Then they tested a slew of computational models to find the one that most closely aligned the neural patterns to the movement patterns.

and

Because Dyer’s decoder only required general statistics about movements, which tend to be similar across animals or across people, the researchers were also able to use movement patterns from one monkey to decipher reaches from the neural data of another monkey—something that is not feasible with traditional supervised decoders.
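The contrast between the two approaches can be sketched on toy data: a supervised decoder regresses recorded activity onto paired movements, while the cryptography-style decoder only knows the statistics of typical reaches and searches for a transform that makes the dimensionality-reduced neural data match them. Everything below (the synthetic 8-target task, the PCA step, the brute-force rotation search) is my own simplification for illustration, not Dyer's published algorithm.

```python
# Toy sketch of the two decoding strategies described above (Python/numpy).
# The synthetic "neural" data is a random linear embedding of 2-D reach
# endpoints; the 8-target task, the PCA step, and the brute-force rotation
# search are my own simplifications, not Dyer's published algorithm.
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic reaching task: 8 targets at different angles and distances ---
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
radii = np.linspace(0.6, 1.4, 8)                 # varying reach distances break symmetry
targets = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
movements = np.repeat(targets, 50, axis=0) + rng.normal(scale=0.05, size=(400, 2))
movements = movements[rng.permutation(len(movements))]

embedding = rng.normal(size=(2, 100))            # 2-D movement -> "100 neurons"
neural = movements @ embedding + rng.normal(scale=0.5, size=(len(movements), 100))

# --- supervised decoder: needs paired (neural, movement) training data ------
W = np.linalg.lstsq(neural[:200], movements[:200], rcond=None)[0]
supervised_err = np.abs(neural[200:] @ W - movements[200:]).mean()

# --- "cryptography-style" decoder: only movement statistics are known -------
move_mean = targets.mean(axis=0)                 # "known" statistics of typical reaches
known_targets = targets - move_mean

centered = neural - neural.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
low_d = centered @ Vt[:2].T                      # PCA down to 2-D
low_d *= np.linalg.norm(known_targets, axis=1).mean() / np.linalg.norm(low_d, axis=1).mean()

def mismatch(points):
    """Crude distribution distance: mean nearest-neighbour gap, in both directions."""
    d = np.linalg.norm(points[:, None, :] - known_targets[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

best_score, best_R = np.inf, np.eye(2)
for theta in np.linspace(0, 2 * np.pi, 360, endpoint=False):
    for flip in (1.0, -1.0):                     # search rotations and reflections
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]]) @ np.diag([1.0, flip])
        score = mismatch(low_d @ R.T)
        if score < best_score:
            best_score, best_R = score, R

alignment_err = np.abs(low_d @ best_R.T + move_mean - movements).mean()
print(f"supervised decoder error: {supervised_err:.3f}")
print(f"alignment decoder error:  {alignment_err:.3f}")
```

With paired training data the regression should come out ahead, but the alignment decoder can get reasonably close using nothing more than the shape of the movement distribution, which is the point of the quoted work.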