US Department of Defense has 592 projects powered by Artificial Intelligence

From Pentagon developing artificial intelligence center

Speaking at the House Armed Services Committee April 12, Mattis said “we’re looking at a joint office where we would concentrate all of DoD’s efforts, since we have a number of AI efforts underway right now. We’re looking at pulling them all together.”

He added that the department counts 592 projects as having some form of AI in them, but noted that not all of those make sense to tie into an AI center. And Griffin wants to make sure smaller projects that are close to completion get done and out into prototyping, rather than tied up in the broader AI project.

And then, of course, there are those AI projects so secret that they won’t even be listed among those 592. It would be interesting to see how many of these relate to the super-soldier use case.

A brain-scanner could be an instrument of explicit coercion

From TED 2018: Thought-Reading Machines and the Death of Love | WIRED

The San Francisco startup is developing an optical imaging system—sufficiently compact to fit inside a skull cap, wand, or bandage—that scatters and captures near-infrared light inside our bodies to create holograms that reveal our occluded selves. The devices could diagnose cancers as well as cardiovascular or other diseases. But because the wavelength of near-infrared light is smaller than a micron, smaller than the smallest neuron, Jepsen believes the resolution of the technology is fine enough to make thoughts visible.

and

the company’s promise depended on combining these elements: proof of the entire body’s translucence; holographic techniques, some dating to the 1960s; and Asian silicon manufacturing, which can make new chip architectures into commercial products. Openwater may be less than two years old, but Jepsen has been thinking about a holographic scanner for decades. She is uniquely suited to the challenge. Her early research was in holography; she led display development at Intel, Google X, and Facebook Oculus; and she has shipped billions of dollars of chips.

and

The idea derives from Jack Gallant, a cognitive neuroscientist at UC Berkeley, who decoded movies shown to subjects in a functional MRI machine by scanning the oxygenated blood in their brains. The images Gallant recovered are blurry, because the resolution of fMRI is comparatively coarse. Holography would not only see blood better but capture the electrochemical pulses of the neurons themselves.

Wearable device picks up neuromuscular signals saying words “in your head”

From Computer system transcribes words users “speak silently” | MIT News

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

and

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

Sci-fi movies shaped the collective imagination of neural interfaces as some sort of hardware port or dongle sticking out of the neck, connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far in the future.

This approach is much more feasible. Imagine if this object, AlterEgo, were to become the main computer peripheral, replacing the keyboard and mouse.
The question is not just about accuracy, but also about how its speed compares to that of existing input methods.
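The loop MIT describes, electrode signals in and recognized words out, can be sketched as ordinary supervised learning. Everything below is an assumption for illustration: the electrode count, the window size, the four-word vocabulary, and the nearest-template classifier are invented stand-ins, not the actual MIT system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each silently spoken word produces a short burst of
# neuromuscular activity sampled from 7 electrodes; we flatten a window of
# samples into one feature vector per utterance.
VOCAB = ["add", "subtract", "multiply", "divide"]
N_ELECTRODES, WINDOW = 7, 20

def synthetic_utterance(word_idx):
    # Toy signal: each word raises the mean activity of one electrode.
    base = rng.normal(0.0, 1.0, (N_ELECTRODES, WINDOW))
    base[word_idx] += 3.0  # word-specific electrode signature
    return base.ravel()

X = np.array([synthetic_utterance(i % len(VOCAB)) for i in range(400)])
y = np.array([i % len(VOCAB) for i in range(400)])
train_X, train_y, test_X, test_y = X[:300], y[:300], X[300:], y[300:]

# "Training" here is just averaging: one signal template per word.
centroids = np.array(
    [train_X[train_y == k].mean(axis=0) for k in range(len(VOCAB))]
)

def predict(x):
    # Nearest template wins.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

accuracy = np.mean([predict(x) == t for x, t in zip(test_X, test_y)])
```

A real system would replace the synthetic signals with recorded EMG windows and the template matcher with a trained neural network, but the train-then-predict shape of the pipeline stays the same.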

Watch the video.

From 3D Printing to Bioprinting and Precision Medicine

From How 3D printing is revolutionizing healthcare as we know it | TechCrunch

3D printing is performed by telling a computer to apply layer upon layer of a specific material (quite often plastic or metal powders), molding them one layer at a time until the final product — be it a toy, a pair of sunglasses or a scoliosis brace — is built. Medical technology is now harnessing this technology and building tiny organs, or “organoids,” using the same techniques, but with stem cells as the production material. These organoids, once built, will in the future be able to grow inside the body of a sick patient and take over when an organic organ, such as a kidney or liver, fails.

researchers in Spain have now taken the mechanics of 3D printing — that same careful layer-upon-layer approach in which we can make just about anything — and revealed a 3D bioprinter prototype that can produce human skin. The researchers, working with a biological ink that contains both human plasma as well as material extracts taken from skin biopsies, were able to print about 100 square centimeters of human skin in the span of about half an hour.

and

A 3D-printed pill, unlike a traditionally manufactured capsule, can house multiple drugs at once, each with different release times. This so-called “polypill” concept has already been tested for patients with diabetes and is showing great promise.

An exoskeleton for athletes and older skiers

From This Affordable Exoskeleton Can Make You A Better Skier

Roam’s founder and CEO is Tim Swift, a longtime veteran of Ekso Bionics, one of the world’s leaders in exoskeletons. Swift loved what Ekso was building, but balked at the hefty price tag that came with systems designed to help the disabled walk. Building devices that aren’t accessible to the masses didn’t make sense to him anymore. So he struck out on his own, aiming to democratize exoskeletons.

and

Roam is using plastics and fabrics, and air for transmission. The company’s core insight, Swift says, is a unique fabric actuator that’s very lightweight, yet strong for its volume and weight. The system relies on valves and a backpack power pack to provide torque to the legs. It also has a machine learning element that’s meant to understand how you ski, and anticipate when you’re going to make a turn in order to deliver the extra torque just when you want it.

When ready for market, the skiing exoskeleton is expected to weigh under 10 pounds, including about four or five pounds of equipment that goes in the backpack.

From This skiing exoskeleton is designed to take the strain off your legs – The Verge

The company claims the exoskeleton will make older skiers feel years younger and able to stay out on the slope for longer. And for athletes, the device will supposedly help them train for days in a row with less fatigue.

So far the company has only built prototypes, but it’s in the process of finalizing a commercial product, set for release in January 2019. Interested skiers can pay $99 to reserve a unit, although the final price is expected to be somewhere between $2,000 and $2,500.

Exoskeletons have a few clear use cases: people with disabilities, workers who lift heavy loads, and supersoldiers. Athletes and healthy people who want to enjoy sports in their later years are interesting new possibilities.

MIT terminates collaboration with Nectome

From MIT severs ties to company promoting fatal brain uploading – MIT Technology Review

According to an April 2 statement, MIT will terminate Nectome’s research contract with Media Lab professor and neuroscientist Edward Boyden.

MIT’s connection to the company drew sharp criticism from some neuroscientists, who say brain uploading isn’t possible.

“Fundamentally, the company is based on a proposition that is just false. It is something that just can’t happen,” says Sten Linnarsson of the Karolinska Institute in Sweden.

He adds that by collaborating with Nectome, MIT had lent credibility to the startup and increased the chance that “some people actually kill themselves to donate their brains.”

It didn’t take long.

It’s hard enough to stand the pressure of the press and public opinion for normal companies. It must be impossibly hard to do so when you try to commercialize an attempt to escape death.

Many of the companies that are covered here on H+ face the same challenge.

AR glasses further augmented by human assistants 

From Aira’s new smart glasses give blind users a guide through the visual world | TechCrunch

Aira has built a service that basically puts a human assistant into a blind user’s ear by beaming live-streaming footage from the glasses’ camera to the company’s agents, who can then give audio instructions to the end users. The guides can present them with directions or describe scenes for them. It’s really the combination of high-tech hardware and highly attentive assistants that makes the service work.

The hardware the company has run this service on in the past has been a bit of a hodgepodge of third-party solutions. This month, the company began testing its own smart glasses solution called the Horizon Smart Glasses, which are designed from the ground-up to be the ideal solution for vision-impaired users.

The company charges based on usage; $89 per month will get users the device and up to 100 minutes of usage. There are various pricing tiers for power users who need a bit more time.

The glasses integrate a 120-degree wide-angle camera so guides can gain a fuller picture of a user’s surroundings and won’t have to instruct them to point their head in a different direction quite as much. It’s powered by what the startup calls the Aira Horizon Controller, which is actually just a repurposed Samsung smartphone that powers the device in terms of compute, battery and network connection. The controller is appropriately controlled entirely through the physical buttons and also can connect to a user’s smartphone if they want to route controls through the Aira mobile app.

Interesting hybrid implementation and business model, but I have serious doubts that a solution depending on human assistants can scale at a planetary level, or retain the necessary quality of service at that scale.

Eye Tracking For AR Devices?

From Eye Tracking Is Coming to Virtual Reality Sooner Than You Think. What Now? | WIRED

That button had activated the eye-tracking technology of Tobii, the Swedish company where Karlén is a director of product management for VR. Two cameras inside the headset had begun watching my eyes, illuminating them with near-IR light, and making sure that my avatar’s eyes did exactly what mine did.

Tobii isn’t the only eye-tracking company around, but with 900 employees, it may be the largest. And while the Swedish company has been around since 2006, Qualcomm’s prototype headset—and the latest version of its Snapdragon mobile-VR platform, which it unveiled at the Game Developers Conference in San Francisco this week—marks the first time that eye-tracking is being included in a mass-produced consumer VR device.

and

Eye-tracking unlocks “foveated rendering,” a technique in which graphical fidelity is only prioritized for the tiny portion of the display your pupils are focused on. For Tobii’s version, that’s anywhere from one-tenth to one-sixteenth of the display; everything outside that area can be dialed down as much as 40 or 50 percent without you noticing, which means less load on the graphics processor. VR creators can leverage that luxury in order to coax current-gen performance out of a last-gen GPU, or achieve a higher frame rate than they might otherwise be able to.

That’s just the ones and zeros stuff. There are compelling interface benefits as well. Generally, input in VR is a three-step process: look at something, point at it to select it, then click to input the selection. When your eyes become the selection tool, those first two steps become one. It’s almost like a smartphone, where pointing collapses the selection and click into a single step. And because you’re using your eyes and not your head, that means less head motion, less fatigue, less chance for discomfort.

and

There’s also that whole cameras-watching-your-eyes thing. Watching not just what your eyes are doing, but where they look and for how long—in other words, tracking your attention. That’s the kind of information advertisers and marketers would do just about anything to get their hands on. One study has even shown that gaze-tracking can be (mis)used to influence people’s biases and decision-making.

“We take a very hard, open stance,” he says. “Pictures of your eyes never go to developers—only gaze direction. We do not allow applications to store or transfer eye-tracking data or aggregate over multiple users. It’s not storable, and it doesn’t leave the device.”

Tobii does allow for analytic collection, Werner allows; the company has a business unit focused on working with research facilities and universities. He points to eye-tracking’s potential as a diagnostic tool for autism spectrum disorders, to its applications for phobia research. But anyone using that analytical license, he says, must inform users and make eye-tracking data collection an opt-in process.

There is no reason why eye tracking couldn’t do the same things (and pose the same risks) in AR devices.
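As a back-of-the-envelope illustration of the foveated-rendering arithmetic quoted above, here is a toy resolution-scale function. The foveal radius and the 50 percent periphery scale are illustrative numbers loosely taken from the article, not Tobii’s actual parameters.

```python
import math

# Full resolution only inside a small foveal window around the gaze
# point; everything else is rendered at a reduced scale.
FOVEA_RADIUS = 0.15      # as a fraction of the screen diagonal (assumed)
PERIPHERY_SCALE = 0.5    # "dialed down ... 40 or 50 percent"

def render_scale(pixel, gaze, screen=(1.0, 1.0)):
    """Resolution scale (0..1] to use at `pixel` given `gaze`, both in
    normalized [0, 1] x [0, 1] screen coordinates."""
    diag = math.hypot(*screen)
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1]) / diag
    return 1.0 if dist <= FOVEA_RADIUS else PERIPHERY_SCALE

def frame_cost(gaze, grid=32):
    # Average scale over a grid of sample points: a rough proxy for the
    # fraction of full-resolution GPU work the frame still needs.
    samples = [
        render_scale(((i + 0.5) / grid, (j + 0.5) / grid), gaze)
        for i in range(grid) for j in range(grid)
    ]
    return sum(samples) / len(samples)
```

With these numbers, a frame costs roughly 60 percent of full-resolution rendering, which is the kind of headroom the article says lets a last-gen GPU deliver current-gen performance.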

MIT Intelligence Quest Launch Event Videos

From MIT IQ Launch

On March 1, we convened at Kresge Auditorium on the MIT campus to set out on the MIT Intelligence Quest — an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society.

MIT faculty, alumni, students, and friends talked about their work across all aspects of this domain — from unpublished research, to existing commercial enterprises, to the social and ethical implications of AI.

Learn why and how MIT is primed to take the next breakthrough step in advancing the science and applications of intelligence by clicking on the available presentations below.

MIT announced the Intelligence Quest in February. This is the full launch event: dozens of presentations were recorded and are now available online.

Must-watch.

Nectome will preserve your brain, but you have to be euthanized first

From A startup is pitching a mind-uploading service that is “100 percent fatal” – MIT Technology Review

Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal.”

and

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it.

Why Augmented-Reality Glasses Are Ugly

From Why Do Augmented-Reality Glasses Look So Bad? | WIRED

“The battle is between immersive functionality and non-dorky, even cool-looking design. The holy grail is something that not only resembles a normal pair of, say, Gucci glasses, but has functionality that augments your life in a meaningful way.”

Right now, that demands a trade-off. The best AR displays require bulky optical hardware to optimize resolution and provide a wide field of view. That makes it possible to do all kinds of cool things in augmented reality. But early versions, like the Meta 2 AR headset, look more like an Oculus Rift than a pair of Warby Parkers. Slimmer AR displays, like the one used in Google Glass, feel more natural to wear, but they sit above or next to the normal field of vision, so they’re less immersive and less functional. Adding other features to the glasses, such as a microphone, a decent camera, and various sensors, also increases bulk and makes it harder to create something comfortable or stylish.

This tension has split the field of AR glasses into two extremes. On one end, you get hulking glasses packed with features to show off the unbridled potential of augmented reality. On the other end, you sacrifice features to make a wearable that looks and feels more like normal eyewear.

What It’s Like Having to Charge Your Arm

From Never Mind Charging Your Phone: Cyborg Angel Giuffria Explains What It’s Like Having to Charge Your Arm – Core77

At SXSW Angel Giuffria, one of America’s better-known cyborgs, encountered a lot of people that wanted her to demo her robotic arm. As a de facto spokeswoman for the prosthetic community, she gamely agreed, with the result being that her batteries wore down faster than normal.

Be sure to read the whole Q&A session that spontaneously developed over Twitter.

Smart glasses designed to help dyslexic people to read words

From These smart glasses convert words into voice for people who are visually impaired – The Verge

The Oton Glass are glasses with two tiny cameras and an earphone on the sides. Half of the lens is a mirror that reflects the user’s eye so that the inner-facing camera can track eye movements and blinks.

Users will look at some text and blink to capture a photo of what’s in front of them, which gets transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and then converted into a voice that plays through the earpiece. If the system is unable to read those words, a remote worker would be available to troubleshoot.

The Oton was most recently a third-place runner-up for the James Dyson award in 2016:

Similar products exist in the world, but they are not commercialized yet. They require a technological breakthrough and trial-and-error on how to deploy smart glasses. The originality of OTON GLASS consists of two aspects: technology and deployment. First, in the technology realm, startups such as Orcam Inc. and Hours Technology Inc. are currently developing smart glasses for blind people. They mainly develop powerful OCR for English (the Latin alphabet) using machine-learning techniques. OTON GLASS, on the other hand, focuses on Japanese character recognition as its unique aspect. OTON GLASS aims to solve the user’s problems by becoming a hybrid (human-to-computer) recognizer rather than approaching the problem with OCR technology alone. Secondly, in terms of deployment, OTON GLASS is an all-in-one device that combines camera and glasses, meaning it looks like normal glasses. Its capture trigger, based on human behavior, is a natural interaction for people.
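The blink-to-speech flow described above (capture on blink, cloud OCR, audio back, human fallback) can be sketched as a short pipeline. All the function bodies here are invented stand-ins, not Oton Glass’s actual API.

```python
# Sketch of the described Oton Glass flow: a blink triggers a photo,
# text is extracted in the cloud, and audio comes back to the earpiece;
# a remote human worker is the fallback when OCR fails.

def ocr(image):
    # Stand-in for the Raspberry Pi cloud OCR service.
    return image.get("text")  # None when recognition fails

def read_aloud(text):
    # Stand-in for text-to-speech through the earpiece.
    return f"speaking: {text}"

def ask_remote_worker(image):
    # Stand-in for the human-in-the-loop fallback.
    return "speaking: (human-transcribed text)"

def on_blink(image):
    text = ocr(image)
    return read_aloud(text) if text else ask_remote_worker(image)
```

The interesting design choice is that the human fallback sits behind the same interface as the OCR path, so the user experience is identical whether a machine or a person did the reading.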

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at the eye’s blood vessels, with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?
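The association task the article describes, image-derived features in and risk factors out, has the shape of a regression problem. The real work used deep networks on retinal scans from nearly 300,000 patients; the linear model and every number below are synthetic stand-ins that just show the shape of the task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the described pipeline: retinal-image features in,
# cardiovascular risk factors (say, age and systolic blood pressure) out.
n_patients, n_features = 500, 16
features = rng.normal(size=(n_patients, n_features))
true_weights = rng.normal(size=(n_features, 2))  # 2 targets: age, BP
targets = features @ true_weights + rng.normal(scale=0.1, size=(n_patients, 2))

# Fit on the first 400 patients, evaluate on the held-out 100.
W, *_ = np.linalg.lstsq(features[:400], targets[:400], rcond=None)
pred = features[400:] @ W
rmse = float(np.sqrt(np.mean((pred - targets[400:]) ** 2)))
```

Swap the synthetic features for pixels and the least-squares fit for a convolutional network and you get the Google/Verily setup; the train-on-most, validate-on-held-out structure is the same.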

Self-healing and recyclable electronic skin 

From This electronic skin can heal itself — and then make more skin – The Verge

Researchers have created an electronic skin that can be completely recycled. The e-skin can also heal itself if it’s torn apart.

The device, described today in the journal Science Advances, is basically a thin film equipped with sensors that can measure pressure, temperature, humidity, and air flow. The film is made of three commercially available compounds mixed together in a matrix and laced with silver nanoparticles: when the e-skin is cut in two, adding the three compounds to the “wound” allows the e-skin to heal itself by recreating chemical bonds between the two sides. That way, the matrix is restored and the e-skin is as good as new. If the e-skin is broken beyond repair, it can just be soaked in a solution that “liquefies” it so that the materials can be reused to make new e-skin. One day, this electronic skin could be used in prosthetics, robots, or smart textiles.

Nanorobots have potential as intelligent drug delivery systems

From New DNA nanorobots successfully target and kill off cancerous tumors | TechCrunch

Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth

and

DNA nanorobots are a somewhat new concept for drug delivery. They work by getting programmed DNA to fold into itself like origami and then deploying it like a tiny machine, ready for action.

The [Chinese] scientists behind this study tested the delivery bots by injecting them into mice with human breast cancer tumors. Within 48 hours, the bots had successfully grabbed onto vascular cells at the tumor sites, causing blood clots in the tumor’s vessels and cutting off their blood supply, leading to their death.

Remarkably, the bots did not cause clotting in other parts of the body, just the cancerous cells they’d been programmed to target

CRISPR pioneers now use it to detect infections like HPV, dengue, and Zika

From New CRISPR tools can detect infections like HPV, dengue, and Zika – The Verge

The new tools, developed by the labs of CRISPR pioneers Jennifer Doudna and Feng Zhang, are showcased in two studies published today in the journal Science. In one paper, Doudna’s team describes a system called DETECTR, which can accurately identify different types of the HPV virus in human samples. In the second paper, Zhang’s team shows an upgraded version of SHERLOCK — which was shown last year to detect viruses like Zika and dengue, as well as other harmful bacteria — in human samples.

and

The CRISPR used in the first Science study is called CRISPR-Cas12a. Doudna’s team discovered that when this type of CRISPR snips double-stranded DNA, it does something interesting: it starts shredding single-stranded DNA as well

the CRISPR system is programmed to detect the HPV DNA inside a person’s cells. When CRISPR detects it, it also cuts a “reporter molecule” with single-stranded DNA that releases a fluorescent signal. So if the cells are infected with HPV, scientists are able to see the signal and quickly diagnose a patient. For now, DETECTR was tested in a tube containing DNA from infected human cells, showing it could detect HPV16 with 100 percent accuracy, and HPV18 with 92 percent accuracy.

and

Called SHERLOCK, this system uses a variety of CRISPR enzymes, including Cas12a. Last year, Zhang’s team showed that SHERLOCK uses CRISPR-Cas13a to find the genetic sequence of Zika, dengue, and several other bacteria, as well as the sequences associated with a cancer mutation in a variety of human samples, such as saliva. Now, the team has improved the tool to be 100 times more sensitive and detect multiple viruses — such as Zika and dengue — in one sample simultaneously. It does this by combining different types of CRISPR enzymes, which are unleashed together to target distinct bits of DNA and RNA, another of the major biological molecules found in all forms of life. Some enzymes also work together to make the tool more sensitive.

If you read Doudna’s book, featured in the H+ “Key Books” section, you realise the enormous progress made in the last 10 years in DNA manipulation thanks to CRISPR, and yet you also come away with a clear understanding that we are just scratching the surface of what is possible.
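The DETECTR readout described in the quotes has a simple logic worth making concrete: Cas12a is programmed with a guide; finding its double-stranded target switches on indiscriminate cutting of single-stranded reporter DNA, which releases fluorescence. The sketch below is a toy model of that logic only; the sequences, reporter counts, and threshold are invented.

```python
# Toy model of the DETECTR mechanism: target found -> collateral
# shredding of ssDNA reporters -> fluorescent signal -> diagnosis.

def detectr_assay(sample_dna, guide, n_reporters=1000):
    activated = guide in sample_dna            # Cas12a finds its dsDNA target
    cleaved = n_reporters if activated else 0  # collateral ssDNA shredding
    fluorescence = cleaved / n_reporters
    return fluorescence > 0.5                  # signal above threshold => positive

# Hypothetical guide standing in for an HPV16-specific sequence.
hpv16_guide = "ACGTTGCA"
positive = detectr_assay("TTTT" + hpv16_guide + "GGGG", hpv16_guide)
```

The key amplification trick is that one recognition event unleashes cleavage of many reporters, which is what turns a rare molecular match into a signal you can actually see.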

Sequence your genome for less than $1,000 and sell it via blockchain

From Human sequencing pioneer George Church wants to give you the power to sell your DNA on the blockchain | TechCrunch

MIT professor and godfather of the Human Genome Project George Church wants to put your genes on it.

His new startup Nebula Genomics plans to sequence your genome for less than $1,000 (the current going rate of whole genome sequencing) and then add your data to the blockchain through the purchase of a “Nebula Token.”

Church and his colleagues laid out in a recently released white paper that this will put the genomic power in the hands of the consumer, as opposed to companies like 23andMe and AncestryDNA, which own your genomic data after you take that spit tube test.

These companies sell that data in large swaths to pharmaceutical and research companies, often for millions of dollars. However, using the blockchain, consumers can choose to sell their own data directly.

and

Those buying up tokens and sequencing their DNA through Nebula don’t have to sell it for money, of course, and Nebula says they can still discover insights about their own genetics through the company app without sharing it elsewhere, if they desire.

However, all bought and sold data will be recorded on the blockchain, which is a technology allowing for the recording of all transactions using a key code known only to the person who holds the information.

Two thoughts:

  • If this idea generates even a tiny bit of money for each individual involved, it might unlock unprecedented access to genetic information for advanced engineering.
  • Our genome is the second-to-last thing we have left to sell. The last one is our attention. But once they have our genome, our attention may come for free.
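To make the “recorded on the blockchain” mechanic concrete, here is a toy hash-chained ledger showing why recorded transactions are tamper-evident. This is a minimal sketch, not Nebula Genomics’ actual protocol, and every name in it is invented.

```python
import hashlib
import json

# Each block commits to the hash of the previous one, so rewriting any
# past transaction breaks every later link in the chain.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transaction})
    return chain

def verify(chain):
    # Recompute every link; any tampered block breaks the chain.
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append(ledger, {"buyer": "pharma_co", "data": "genome#123", "tokens": 5})
append(ledger, {"buyer": "research_lab", "data": "genome#123", "tokens": 2})
```

Note what this does and does not give you: the history of who bought access is tamper-evident, but keeping the genome itself private still depends on encryption and access control outside the chain.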

A biohacker injected himself with a DIY herpes treatment in front of a conference audience

From A biohacker injected himself with a DIY herpes treatment in front of a live audience – The Verge

Aaron Traywick, 28, who leads biotech firm Ascendance Biomedical, used an experimental herpes treatment that did not go through the typical route of clinical trials to test its safety.

Instead of being developed by research scientists in laboratories, it was created by a biohacker named Andreas Stuermer, who “holds a masters degree and is a bioentrepreneur and science lover,” according to a conference bio. This is typical of the Ascendance approach. The company believes that FDA regulations for developing treatments are too slow and that having biohackers do the research and experiment on themselves can speed up the process to everyone’s benefit. In the past, the company’s plans have included trying to reverse menopause, a method that is now actually in clinical trials.

From Biohackers Disregard FDA Warning on DIY Gene Therapy – MIT Technology Review

Experts say any gene therapy prepared by amateurs would probably not be potent enough to have much effect, but it could create risks such as an immune reaction to the foreign DNA. “I think warning people about this is the right thing,” says David Gortler, a drug safety expert with the consulting group Former FDA. “The bottom line is, this hasn’t been tested.”

The problem facing regulators is that interest in biohacking is spreading, and it’s increasingly easy for anyone to obtain DNA over the internet.

The last sentence is key. As in the tech industry, once you trigger bottom-up adoption the process is irreversible. And disruptive.

Police in China have begun using sunglasses equipped with facial recognition technology

From Chinese police spot suspects with surveillance sunglasses – BBC News

The glasses are connected to an internal database of suspects, meaning officers can quickly scan crowds while looking for fugitives.

The sunglasses have already helped police capture seven suspects, according to Chinese state media.

The seven people who were apprehended are accused of crimes ranging from hit-and-runs to human trafficking.

and

The technology allows police officers to take a photograph of a suspicious individual and then compare it to pictures stored in an internal database. If there is a match, information such as the person’s name and address will then be sent to the officer.

An estimated 170 million CCTV cameras are already in place and some 400 million new ones are expected to be installed in the next three years.

Many of the cameras use artificial intelligence, including facial recognition technology.

In December 2017, I published Our Machines Can Very Easily Recognise You Among At Least 2 Billion People in a Matter of Seconds. It didn’t take long to go from press claims to real-world implementation.

Human augmentation 2.0 is already here, just not evenly distributed.

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
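Those variation figures matter because the chip’s core operation is an analog matrix-vector multiply, and every imperfection in a synapse shows up directly in the output. The toy simulation below is purely illustrative: only the 4 percent device-to-device and 1 percent cycle-to-cycle numbers come from the article, everything else (sizes, distributions) is invented to show how weight variation propagates into the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal synaptic weights we would like the crossbar to store.
ideal_weights = rng.uniform(0.1, 1.0, size=(8, 8))

# Device-to-device variation: each synapse deviates ~4% from its target,
# fixed at fabrication time (figure taken from the MIT result above).
device_variation = 1 + 0.04 * rng.standard_normal(ideal_weights.shape)
programmed = ideal_weights * device_variation

def crossbar_output(x, cycle_noise=0.01):
    """One analog matrix-vector multiply; each read adds ~1% cycle noise."""
    noisy = programmed * (1 + cycle_noise * rng.standard_normal(programmed.shape))
    return noisy @ x

x = rng.uniform(0, 1, size=8)
ideal = ideal_weights @ x
actual = crossbar_output(x)
rel_error = np.abs(actual - ideal) / np.abs(ideal)
print(f"mean relative output error: {rel_error.mean():.3f}")
```

Because each output sums many noisy terms, the per-synapse variation partly averages out, which is one reason a few percent of device mismatch can still yield 95 percent recognition accuracy in simulation.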

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI), and before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
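The article doesn’t include code, but one well-known technique in this family is Monte Carlo dropout: keep dropout active at inference time and read the spread of repeated stochastic predictions as a confidence measure. A minimal sketch with toy, untrained weights (in a real system these would come from training):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy output layer; in practice these weights come from training.
W = rng.standard_normal((16, 1))

def predict_once(x, drop_p=0.5):
    """One stochastic forward pass: dropout stays ON at inference time."""
    mask = rng.random(x.shape) > drop_p
    h = np.where(mask, x, 0.0) / (1 - drop_p)  # inverted dropout scaling
    return float(h @ W)

def predict_with_uncertainty(x, n_samples=200):
    """Monte Carlo dropout: mean = prediction, std = the model's self-doubt."""
    samples = np.array([predict_once(x) for _ in range(n_samples)])
    return samples.mean(), samples.std()

x = rng.standard_normal(16)
mean, std = predict_with_uncertainty(x)
print(f"prediction {mean:.2f} +/- {std:.2f}")
```

A self-driving car built on this pattern could treat a large standard deviation as a signal to slow down or hand control back to the human, exactly the failure mode Tran describes.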

Improving brain-computer interfaces by decrypting neural patterns

From Cracking the Brain’s Enigma Code – Scientific American

Many human movements, such as walking or reaching, follow predictable patterns, too. Limb position, speed and several other movement features tend to play out in an orderly way. With this regularity in mind, Eva Dyer, a neuroscientist at the Georgia Institute of Technology, decided to try a cryptography-inspired strategy for neural decoding.

Existing brain-computer interfaces typically use so-called ‘supervised decoders.’ These algorithms rely on detailed moment-by-moment movement information such as limb position and speed, which is collected simultaneously with recorded neural activity. Gathering these data can be a time-consuming, laborious process. This information is then used to train the decoder to translate neural patterns into their corresponding movements. (In cryptography terms, this would be like comparing a number of already decrypted messages to their encrypted versions to reverse-engineer the key.)

By contrast, Dyer’s team sought to predict movements using only the encrypted messages (the neural activity), and a general understanding of the patterns that pop up in certain movements.

and

Her team trained three macaque monkeys to either reach their arm or bend their wrist to guide a cursor to a number of targets arranged about a central point. At the same time, the researchers used implanted electrode arrays to record the activity of about 100 neurons in each monkey’s motor cortex, a key brain region that controls movement.

To find their decoding algorithm, the researchers performed an analysis on the neural activity to extract and pare down its core mathematical structure. Then they tested a slew of computational models to find the one that most closely aligned the neural patterns to the movement patterns.

and

Because Dyer’s decoder only required general statistics about movements, which tend to be similar across animals or across people, the researchers were also able to use movement patterns from one monkey to decipher reaches from the neural data of another monkey—something that is not feasible with traditional supervised decoders.
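Dyer’s actual algorithm is more sophisticated, but the core idea of decoding without paired training data can be shown in a toy one-dimensional example: if neural activity is an unknown monotone function of movement speed, you can decode it by aligning quantiles against a reference speed distribution taken from another session or animal. Everything below is simulated; the tanh “encoding” and the gamma speed distribution are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# "General movement statistics": speed samples from a different session
# (or even a different animal) -- no paired neural recordings needed.
reference_speeds = rng.gamma(2.0, 1.0, size=5000)

# Recorded session: true speeds we pretend not to know, and neural
# activity related to speed by an unknown monotone "encryption".
true_speeds = rng.gamma(2.0, 1.0, size=1000)
neural = np.tanh(0.5 * true_speeds) + 0.005 * rng.standard_normal(1000)

# Decode by distribution alignment: map each neural sample's quantile
# onto the same quantile of the reference speed distribution.
ranks = neural.argsort().argsort() / (len(neural) - 1)
decoded = np.quantile(reference_speeds, ranks)

corr = np.corrcoef(decoded, true_speeds)[0, 1]
print(f"decoded-vs-true correlation: {corr:.3f}")
```

The point of the sketch is the “cryptography” framing: the decoder never sees a single paired (neural, movement) example, only the statistics of movements in general.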

“There are people alive today who will live for 1,000 years”

From Aubrey de Grey: scientist who says humans can live for 1,000 years

Most approaches aimed at combating ageing focus on arresting the harmful byproducts of metabolism, he says. These cause cellular damage and decay, which, in turn, accumulate to trigger the age-related disorders, such as cancer or dementia, that tend to finish us off.

For de Grey, this strategy turns anti-ageing treatment into an impossible game of Whac-A-Mole. Because we understand metabolism so poorly, our efforts to interfere with it remain crude and the process of decay races through the body far quicker than treatments to avert it can keep up.

Instead of stopping the damage, the approach that de Grey has developed at his research centre — Strategies for Engineered Negligible Senescence (SENS), a public charity that he co-founded in 2009 — focuses on repair. This “engineering” approach is designed to keep the process of degradation below the threshold at which it turns into life-threatening disease. “If you can repair the microscopic damage then you are sidestepping the bigger problem [of prevention]”.

Assuming for a moment that some people alive today will be able to extend their lifespan to 200 years, or even 1,000 years, what would they do with such an enormous span of time?

Today humans don’t really have a “life strategy”. They just live, allocating their lifetime to various activities according to what society has established. But what happens when your time extends well beyond the expectations of your society?

You may want to watch de Grey’s TED Talk, too: A roadmap to end aging

Infusions of blood plasma from young donors to rejuvenate the body

From Exclusive: Inside the clinic offering young blood to cure ageing | New Scientist

So it’s a bit odd that this is the epicentre of a phenomenon rocking Silicon Valley: young blood treatments. JR is one of about 100 people who have each paid $8000 to join a controversial trial, offering them infusions of blood plasma from donors aged between 16 and 25 in a bid to turn back the clock. Participants have come from much further afield, including Russia and Australia.

and

in 2014, a team led by Tony Wyss-Coray, a neuroscientist at Stanford University, injected middle-aged mice with plasma from young mice. Sure enough, after three weeks they had anatomical improvements in the brain and a cognitive boost, compared with mice given a placebo.

The plasma didn’t even need to come from the same species – old mice became just as sprightly when the injection came from young humans. “We saw these astounding effects,” Wyss-Coray told New Scientist in 2014. “The human blood had beneficial effects on every organ we’ve studied so far.”

and

Ambrosia is a start-up headquartered in Washington DC. The trial didn’t need regulatory approval because plasma is already a standard treatment to replace missing proteins in people with rare genetic diseases. And there’s no placebo arm to it. All you need to join is a date of birth that makes you over 35 – and a spare $8000.

For your money, you are infused with 2 litres of plasma left over from young people who have donated to blood centres (see “Blood myths”). Unlike the trials looking at young blood’s effects on specific diseases, Ambrosia has a softer target: the general malaise of being old. In addition to measuring changes in about 100 biomarkers in blood, the firm is also “looking for general improvements”, says Jesse Karmazin, who runs the start-up.

The methodology falls short of the normal standards of scientific rigour, so it’s unsurprising that scientists and ethicists have accused Karmazin’s team of taking advantage of public excitement around the idea.

The numbers were as unverifiable as they were impressive: one month after treatment, 70 participants saw reductions in blood factors associated with risk of cancer, Alzheimer’s disease and heart disease, and reductions in cholesterol were on par with those from statin therapy.

and

Risks commonly associated with plasma transfusion include transfusion-related acute lung injury, which can be fatal; transfusion-associated circulatory overload; and allergic reactions. Rare complications include catching an infectious disease: blood products carry a greater than 1 in a million chance of HIV transmission. That’s too risky for JR, who tells me that before every treatment he takes a dose of the HIV prophylactic PrEP.

and

There could be risks of developing autoimmune disorders. And some fear that pumping stimulating proteins into people for years could lead to cancer. “If you keep infusing blood, the risk of reactions goes up,” says Dobri Kiprov, an immunologist at California Pacific Medical Center in San Francisco. “Many of these people are just eager to get younger – they don’t have a particular disease, so it’s not justified.”

It sounds dangerous and unproven, but multiple high-profile startups are pursuing this approach:

Google’s life-extension biotech arm Calico, among others, she developed an experiment in which a pump ferried half the blood from one individual into another.

and

anti-ageing start-up Unity Biotechnology, which is backed by Amazon founder Jeff Bezos’s investment company. They are developing a blood-exchange device, a kind of dialysis machine for old age, which cycles your blood through a filter that washes a laundry list of harmful compounds out of the plasma before returning it to you. This would carry no immune effects or disease risks, because it’s your own blood. No regulatory approval is needed, because dialysis filters that remove proteins from plasma are already in use, for example to remove cholesterol in people with certain hereditary diseases.

They are also developing sensors to notify you when levels of bad biomarkers are getting too high – a decrepitude meter to tell you when it’s time for a decrepitude wash.

You may want to watch Tony Wyss-Coray’s TED Talk, too: How young blood might help reverse aging. Yes, really

The US Air Force has a division dedicated to human performance enhancement

From 711th Human Performance Wing

The 711th Human Performance Wing (711 HPW), headquartered at Wright-Patterson Air Force Base in Ohio, is the first human-centric warfare wing to consolidate human performance research, education and consultation under a single organization. Established under the Air Force Research Laboratory, the 711 HPW is comprised of the Airman Systems Directorate (RH), the United States Air Force School of Aerospace Medicine (USAFSAM) and the Human Systems Integration Directorate (HP). The Wing delivers unparalleled capability to the Air Force through a combination of world class infrastructure and expertise of its diverse workforce of military, civilian and contractor personnel encompassing 75 occupational specialties including science and engineering, occupational health and safety, medical professions, technicians, educators, and business operations and support.

VISION
To be a world leader for human performance.

MISSION
To advance human performance in air, space, and cyberspace through research, education, and consultation. The Wing supports the most critical Air Force resource – the Airman of our operational military forces. The Wing’s primary focus areas are aerospace medicine, Human Effectiveness Science and Technology, and Human Systems Integration. In conjunction with the Naval Medical Research Unit – Dayton and surrounding universities and medical institutions, the 711 HPW functions as a Joint Department of Defense Center of Excellence for human performance sustainment, optimization, and readiness.

Notice the inclusion of “cyberspace” among the environments where they try to advance human performance.

Smart diapers for the elderly – when smart monitoring is too much monitoring?

From Pixie Scientific announces availability for purchase in the UK of Pixie Pads, the first adult

Pixie Pads will help incontinent adults, including Alzheimer’s and other dementia sufferers, for whom behavioral symptoms of UTI are often confused with progression of dementia. Patients suffering the effects of stroke, spinal cord injury, or developmental disabilities, and men recovering from radical prostatectomy will also benefit from continuous monitoring enabled by Pixie Pads.

and

Disposable Pixie Pads contain an indicator panel that is scanned by a caregiver using the mobile Pixie App at changing time. The app stores urinalysis data in a secure online service for review and long-term monitoring. It issues an alert to a professional caregiver if there are signs of an infection that require further attention.

This was happening in mid-2016. One year later, Pixie Scientific received FDA approval to sell in the US as well and started shipping the pads.

Notice that the company initially targeted a completely different market, newborns, but I guess that wasn’t received too well. While monitoring the body can help diagnose and cure illnesses early on, it’s a big cultural shift from the state of “blindness” we are used to. Too much monitoring can create a state of anxiety and hyper-reaction to any deviation from the baseline, not just legitimate symptoms.

CRISPR might be employed to destroy entire species

From A Crack in Creation:

Ironically, CRISPR might also enable the opposite: forcible extinction of unwanted animals or pathogens. Yes, someday soon, CRISPR might be employed to destroy entire species—an application I never could have imagined when my lab first entered the fledgling field of bacterial adaptive immune systems just ten years ago. Some of the efforts in these and other areas of the natural world have tremendous potential for improving human health and well-being. Others are frivolous, whimsical, or even downright dangerous. And I have become increasingly aware of the need to understand the risks of gene editing, especially in light of its accelerating use. CRISPR gives us the power to radically and irreversibly alter the biosphere that we inhabit by providing a way to rewrite the very molecules of life any way we wish. At the moment, I don’t think there is nearly enough discussion of the possibilities it presents—for good, but also for ill.

We have a responsibility to consider the ramifications in advance and to engage in a global, public, and inclusive conversation about how to best harness gene editing in the natural world, before it’s too late.

and

If the first of these gene drives (for pigmentation) seems benign and the second (for malaria resistance) seems beneficial, consider a third example. Working independently of the California scientists, a British team of researchers—among them Austin Burt, the biologist who pioneered the gene drive concept—created highly transmissive CRISPR gene drives that spread genes for female sterility. Since the sterility trait was recessive, the genes would rapidly spread through the population, increasing in frequency until enough females acquired two copies, at which point the population would suddenly crash. Instead of eradicating malaria by genetically altering mosquitoes to prevent them from carrying the disease, this strategy presented a blunter instrument—one that would cull entire populations by hindering reproduction. If sustained in wild-mosquito populations, it could eventually lead to outright extermination of an entire mosquito species.

and

It’s been estimated that, had a fruit fly escaped the San Diego lab during the first gene drive experiments, it would have spread genes encoding CRISPR, along with the yellow-body trait, to between 20 and 50 percent of all fruit flies worldwide.
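The speed of that spread follows from simple Mendelian arithmetic. Under textbook simplifications (random mating, no fitness cost, a drive that converts the wild-type allele in heterozygotes with efficiency c), the drive allele frequency obeys p' = p * (1 + c * (1 - p)). The sketch below is a deterministic model with illustrative parameters, not a reproduction of any published experiment:

```python
def drive_frequency(p0=0.01, c=0.9, generations=20):
    """Deterministic allele-frequency model of a CRISPR gene drive.

    In a heterozygote the drive copies itself with efficiency c, so the
    drive allele D is transmitted with probability (1 + c) / 2 instead
    of the Mendelian 1/2. Under random mating this gives
    p' = p * (1 + c * (1 - p)).
    """
    freqs = [p0]
    p = p0
    for _ in range(generations):
        p = min(p * (1 + c * (1 - p)), 1.0)
        freqs.append(p)
    return freqs

traj = drive_frequency()
for gen, p in enumerate(traj):
    print(f"gen {gen:2d}: drive frequency {p:.3f}")
```

Starting from 1 percent of the population, the drive passes 99 percent within roughly a dozen generations, which for fruit flies or mosquitoes is a matter of months. That is the mathematics behind Doudna’s warning.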

The author of this book, Jennifer Doudna, is one of the scientists who discovered the groundbreaking gene-editing technique CRISPR-Cas9. The book is a fascinating narration of how CRISPR came to be, and it’s listed in the Key Books section of H+.

The book was finished in September 2016 (and published in June 2017), so the warning is quite recent.

You may also want to watch Doudna’s TED Talk about the bioethics of CRISPR: How CRISPR lets us edit our DNA.

Using Artificial Intelligence to augment human intelligence

From Using Artificial Intelligence to Augment Human Intelligence

in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful if it weren’t buried under a wall of complexity, putting it out of reach for very many readers.

This could be a seminal paper.

UK company pioneers tissue engineering with 3D bioprinters 

From Applications | 3Dynamic Systems Ltd

3Dynamic Systems is currently developing a range of 3D bioprinted vascular scaffolds as part of its new product line. We have been developing 3D bioprinting as a research tool since 2012 and have now pushed forward with the commercialisation of the first 3D tissue structures. Called the vascular scaffold, it is the first commercial tissue product to be developed by us. 3DS research has accelerated recently and work is now focussing on the fabrication of heterogeneous tissues for use in surgery.

Currently we manufacture 20mm length sections of bioprinted vessels which, if successful, will lead to larger and more complex vessels being bioprinted in 3D. Our research concentrates on using the natural self-organising properties of cells in order to produce functional tissues.

At 3DS, we have a long-term goal that this technology will one day be suitable for surgical therapy and transplantation. Blood vessels are made up of different cell types and our new Omega allows for many types of cells to be deposited in 3D. Biopsied tissue material is gathered from a host, with stem cells isolated and multiplied. These cells are cultured and placed in a bioreactor, which provides oxygen and other nutrients to keep them alive. The millions of cells that are produced are then added to our bioink and bioprinted into the correct 3D geometry.

Over the next two years we will begin the long road towards the commercialisation of our 3D bioprinted vessels. Further development of this technology will harness tissues for operative repair and, in the short term, tissues for pharmaceutical trials. This next step in the development of the process could one day transform the field of reconstructive medicine and may lead to bioengineering replacement human tissues on demand for transplantation.

The next opportunity for our research is in developing organ on a chip technology to test drugs and treatments. So far we have initial data based on our vascular structures. In the future this method may be used to analyse any side-effects of new pharmaceutical products.

3Dynamic Systems builds 3D bioprinters that automatically produce 3D tissue structures. The company also builds perfusion bioreactors that test tissue structures over periods of months for the effects of stimulation, and that test the influence of drugs on 3D cell behaviour.

Normally, I don’t quote the website of companies working in the field of research and commercial application covered by H+. But these guys followed @hplus on Twitter without asking for any coverage and have a crystal clear website. I wish more companies were like this.

Ford testing exoskeletons for workers in final assembly plant

From Are exoskeletons the future of physical labor? – The Verge

The vest that Paul Collins has been wearing at Ford is made by Ekso Bionics, a Richmond, California-based company. It’s an electronic-free contraption, and the soft part that hugs his chest looks like the front of a backpack. But the back of it has a metal rod for a spine, and a small, curved pillow rests behind his neck. Extending from the spine are spring-loaded arm mechanics, ones that help Collins lift his arms to install carbon cans on Ford C-Max cars, and rubber grommets on Ford Focuses — about 70 cars an hour.

and

since 2011, Ford has been working, in some capacity, on wearable robotics solutions. But rather than trying to develop something that would give workers superhuman strength, the idea is to prevent injury. “In 2016, our injury statistics were the lowest we’ve seen on record. We’ve had an 83 percent decrease in some of these metrics over the past five years, which is all great,” Smets said. “But if you look at the body parts that are still getting injured, it’s predominantly the shoulder. That’s our number one joint for injury. It’s also the longest to return to full functionality, and the most costly.”

The Ekso vest I tried costs around $6,500 and weighs nine pounds. Smets handed me a power tool, flipped a physical switch on the arm of the vest, and told me to raise my arms over my head as though I was on an assembly line. At some point during my movement, the exosuit kicked into action, its spring mechanism lifting my arms the rest of the way. I could leave my arms in place above my head, too, fully supported. My fingers started to tingle after a while in that position.

Watch the video.

Tomorrow’s replacement skin could be 3D printed from a new ink embedded with living bacteria

From This 3D-printed ‘living ink’ could someday help with skin replacements – The Verge

Bacteria are able to do everything from breaking down toxins to synthesizing vitamins. When they move, they create strands of a material called cellulose that is useful for wound patches and other medical applications. Until now, bacterial cellulose could only be grown on a flat surface — and few parts of our body are perfectly flat. In a paper published today in Science Advances, researchers created a special ink that contains these living bacteria. Because it is an ink, it can be used to 3D print in shapes — including a T-shirt, a face, and circles — and not just flat sheets.

Bacterial cellulose is free of debris, holds a lot of water, and has a soothing effect once it’s applied on wounds. Because it’s a natural material, our body is unlikely to reject it, so it has many potential applications for creating skin transplants, biosensors, or tissue envelopes to carry and protect organs before transplanting them.

The amount of research on skin synthesis and augmentation is surprising. H+ is capturing a lot of articles about it.

“We have entered the age where the human genome is a real drug target” – CRISPR stopped mice from going deaf

From This gene therapy stopped mice from going deaf — and could save some humans’ hearing too – The Verge

Although people can lose their hearing for a variety of reasons — old age, as well as exposure to loud noises — genetics are behind a little less than half of all deafness cases, says study co-author David Liu, a professor of chemistry and chemical biology at Harvard, who also has affiliations with the Broad Institute and the Howard Hughes Medical Institute. The hearing-loss disease tackled in this study is caused by mutations in a gene called TMC1. These mutations cause the death of so-called hair cells in the inner ear, which convert mechanical vibrations like sound waves into nerve signals that the brain interprets as hearing. As a result, people start losing their hearing in their childhood or their 20s, and can go completely deaf by their 50s and 60s.

To snip those mutant copies of the gene, Liu and his colleagues mixed CRISPR-Cas9 with a lipid droplet that allows the gene-editing tool to enter the hair cells and get to work. When the concoction was injected into one ear of newborn mice with the disease, the molecular scissors were able to precisely cut the deafness-causing copy of the gene while leaving the healthy copy alone, even if the two copies differ by just one base pair. The treatment allowed the hair cells to stay healthier and prevented the mice from going deaf.

After four weeks, the untreated ears could only pick up noises that were 80 decibels or louder, roughly as loud as a garbage disposal, Liu says. Instead, the injected ears could typically hear sounds in the 60 to 65 decibel range, which is the same as a quiet conversation. “If one can translate that 15 decibel improvement in hearing sensitivity in humans, it would actually make a potential difference in the quality of their hearing capability,” Liu tells The Verge.
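That 15-decibel figure is larger than it sounds, because decibels are logarithmic. A quick calculation (plain arithmetic, not from the paper) shows the treated ears needed roughly 30 times less sound intensity to register a sound:

```python
def db_to_intensity_ratio(db):
    """Decibels are logarithmic: every 10 dB is a 10x change in sound intensity."""
    return 10 ** (db / 10)

# Untreated ears needed ~80 dB sounds; treated ears heard ~65 dB sounds.
improvement_db = 80 - 65
ratio = db_to_intensity_ratio(improvement_db)
print(f"{improvement_db} dB improvement = {ratio:.1f}x less intensity needed")
```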

Limb reanimation through neuroscience and machine learning

From First paralysed person to be ‘reanimated’ offers neuroscience insights : Nature

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand, and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.

Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.
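
The signal path — think about a movement, record cortical activity, classify it, stimulate the matching muscles — can be sketched as a toy classifier. Everything here (the movement labels, the 96-electrode feature vectors, the nearest-centroid decoder) is an illustrative assumption, not the models used in the actual study; the point is the shape of the pipeline, not the algorithm.

```python
import math
import random

# Sketch of the decoding step: classify a vector of neural features
# (say, activity per electrode) into a movement class, which would then
# drive a stimulation pattern on the forearm sleeve.

MOVEMENTS = ["rest", "wrist_flex", "wrist_extend", "hand_open", "hand_close"]
N_ELECTRODES = 96

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def decode(centroids, x):
    """Pick the movement whose average training signature is nearest."""
    return min(centroids, key=lambda m: math.dist(x, centroids[m]))

# Synthetic data: one underlying "signature" per movement, plus noise.
rng = random.Random(0)
signatures = {m: [rng.gauss(0, 1) for _ in range(N_ELECTRODES)] for m in MOVEMENTS}

def noisy(sig):
    return [v + rng.gauss(0, 0.3) for v in sig]

centroids = {m: centroid([noisy(sig) for _ in range(40)]) for m, sig in signatures.items()}

sample = noisy(signatures["hand_open"])   # a noisy "hand open" attempt
print(decode(centroids, sample))
```

In the real system this loop runs continuously, which is why calibration sessions in the laboratory matter so much: the decoder has to be retrained as the recorded signals drift over time.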

This story is a year and a half old, but I just found out about it and I think it’s a critical piece of the big picture that H+ is trying to narrate.

A growing number of artificial intelligence researchers focus on algorithmic bias

Kate Crawford, a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Prof. Crawford is not the only one looking into algorithmic bias. As she shows in her presentation, a growing number of research papers focus on it, and even government agencies have started questioning how AI decisions are made.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias could manipulate people’s worldview in ways that mass media and politics can’t even dream of.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

A task force to review New York City agencies’ use of algorithms and their bias

From New York City Takes on Algorithmic Discrimination | American Civil Liberties Union

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department has since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

Timely, as more and more AI researchers look into algorithmic bias.

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines. The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone. It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

DARPA has become the world’s largest funder of “gene drive” research

From US military agency invests $100m in genetic extinction technologies | Science | The Guardian

A US military agency is investing $100m in genetic extinction technologies that could wipe out malarial mosquitoes, invasive rodents or other species, emails released under freedom of information rules show.

The UN Convention on Biological Diversity (CBD) is debating whether to impose a moratorium on the gene research next year and several southern countries fear a possible military application.

and

Gene-drive research has been pioneered by an Imperial College London professor, Andrea Crisanti, who confirmed he has been hired by Darpa on a $2.5m contract to identify and disable such drives.

Human augmentation has, at least at the beginning, a very limited number of very specific use cases. The supersoldier is certainly the top one.

Defeating cancer costs $500,000 

From Genetic Programmers Are the Next Startup Millionaires – MIT Technology Review

Cell Design Labs, founded by University of California, San Francisco, synthetic biologist Wendell Lim, creates “programs” to install inside T cells, the killer cells of the immune system, giving them new abilities.

Known as “CAR-T,” the treatments are both revolutionary and hugely expensive. A single dose is priced at around $500,000 but often results in a cure. Gilead quickly paid $12 billion to acquire Kite Pharma, maker of one of those treatments.

The initial T cell treatments, however, work only with blood cancers.

From FDA Approves Groundbreaking Gene Therapy for Cancer – MIT Technology Review

The FDA calls the treatment, made by Novartis, the “first gene therapy” in the U.S. The therapy is designed to treat an often-lethal type of blood and bone marrow cancer that affects children and young adults. Known as a CAR-T therapy, the approach has shown remarkable results in patients. The one-time treatment will cost $475,000, but Novartis says there will be no charge if a patient doesn’t respond to the therapy within a month.

The therapy, which will be marketed as Kymriah, is a customized treatment that uses a patient’s own T cells, a type of immune cell. A patient’s T cells are extracted and cryogenically frozen so that they can be transported to Novartis’s manufacturing center in New Jersey. There, the cells are genetically altered to have a new gene that codes for a protein—called a chimeric antigen receptor, or CAR. This protein directs the T cells to target and kill leukemia cells with a specific antigen on their surface. The genetically modified cells are then infused back into the patient.

This is less than the $700,000 previously reported, but still a fortune.

In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation

From In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation: Cell

Current genome-editing systems generally rely on inducing DNA double-strand breaks (DSBs). This may limit their utility in clinical therapies, as unwanted mutations caused by DSBs can have deleterious effects. The CRISPR/Cas9 system has recently been repurposed to enable target gene activation, allowing regulation of endogenous gene expression without creating DSBs. However, in vivo implementation of this gain-of-function system has proven difficult. Here, we report a robust system for in vivo activation of endogenous target genes through trans-epigenetic remodeling. The system relies on recruitment of Cas9 and transcriptional activation complexes to target loci by modified single guide RNAs. As proof-of-concept, we used this technology to treat mouse models of diabetes, muscular dystrophy, and acute kidney disease. Results demonstrate that CRISPR/Cas9-mediated target gene activation can be achieved in vivo, leading to measurable phenotypes and amelioration of disease symptoms. This establishes new avenues for developing targeted epigenetic therapies against human diseases.

CRISPR can be repurposed to enable target gene activation

From Adapted Crispr gene editing tool could treat incurable diseases, say scientists | The Guardian

The technique is an adapted version of the powerful gene editing tool called Crispr. While the original version of Crispr snips DNA in precise locations to delete faulty genes or over-write flaws in the genetic code, the modified form “turns up the volume” on selected genes.

and

In the new version a Crispr-style guide is still used, but instead of cutting the genome at the site of interest, the Cas9 enzyme latches onto it. The new package also includes a third element: a molecule that homes in on the Cas9 and switches on whatever gene it is attached to.

and

The team showed that mice with a version of muscular dystrophy, a fatal muscle-wasting disorder, recovered muscle growth and strength. The illness is caused by a mutation in the gene that produces dystrophin, a protein found in muscle fibres. However, rather than trying to replace this gene with a healthy version, the team boosted the activity of a second gene that produces a protein called utrophin, which is very similar to dystrophin and can compensate for its absence.

Of course, once you can activate genes at will, you can also boost a perfectly healthy human in areas where they are weak or inept.

Genetic engineering for skill enablement, that is.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
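
The geometry is simple enough to sketch. Below is only the paper’s “neutralize” step, applied to toy 3-d vectors (real word2vec embeddings are 300-dimensional, and the full method also includes an equalization step and aggregates several definitional pairs); the numbers are invented for illustration.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def gender_direction(he, she):
    """Simplest estimate of the bias direction: the normalized
    difference of a single definitional pair."""
    d = sub(he, she)
    n = math.sqrt(dot(d, d))
    return scale(d, 1 / n)

def neutralize(w, g):
    """Remove w's component along the bias direction g, leaving a
    vector orthogonal to g (equidistant from the he/she axis)."""
    return sub(w, scale(g, dot(w, g)))

# Toy 3-d "embeddings"; the first coordinate plays the gender role here.
he, she = [1.0, 0.2, 0.1], [-1.0, 0.2, 0.1]
programmer = [0.4, 0.9, 0.3]              # leans toward "he" in this toy space

g = gender_direction(he, she)
debiased = neutralize(programmer, g)
print(debiased)                            # gender component removed
```

After neutralizing, the projection of the word vector onto the bias direction is zero, which is precisely the paper’s geometric definition of a gender-neutral embedding.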

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
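
At its core, identification at this scale works by reducing each face to a compact embedding and running a nearest-neighbour search over the enrolled database. The vectors, names, and similarity threshold below are invented for illustration; Yitu’s actual model and similarity metric are not public.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return num / (na * nb)

def identify(probe, database, threshold=0.8):
    """Return the best-matching enrolled identity, or None if nothing
    in the database is similar enough to the probe embedding."""
    name = max(database, key=lambda k: cosine(probe, database[k]))
    return name if cosine(probe, database[name]) >= threshold else None

# A real system derives high-dimensional embeddings from a deep
# network; 3-d vectors keep the example readable.
database = {
    "suspect_042": [0.9, 0.1, 0.3],
    "citizen_117": [0.1, 0.8, 0.4],
}
camera_frame = [0.88, 0.12, 0.31]   # embedding computed from a metro camera frame
print(identify(camera_frame, database))
```

Making this search fast over billions of enrolled faces is an engineering problem in its own right (approximate indexes, sharding across machines), which is what the “matter of seconds” claim is really about.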

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer on your desk or inside your pocket, you are delusional.

Percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

Google open source tool DeepVariant achieves unprecedented accuracy in human genome sequencing

From Google Is Giving Away AI That Can Build Your Genome Sequence | Wired:

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes.

And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

and

Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But the data produced by today’s machines still only produce incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics.

and

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set.

After the FDA contest they transitioned the model to TensorFlow, Google’s artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by genomics computing platform, DNAnexus, DeepVariant vastly outperformed GATK, Freebayes, and Samtools, sometimes reducing errors by as much as 10-fold.
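
The “layers of data turned into channels” idea can be sketched with a toy pileup encoder. The channel layout and encodings below are simplified assumptions, not DeepVariant’s actual tensor format.

```python
# Toy version of DeepVariant's core trick: encode the reads overlapping
# a candidate site as a multi-channel "image" (one row per read, one
# column per position) that a convolutional network can classify.

BASES = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}

def pileup_channels(reads, reference):
    """Three channels per cell: base identity, base quality (scaled to
    0-1), and whether the base matches the reference."""
    image = []
    for seq, quals in reads:
        row = [
            (BASES[b],                              # channel 1: which base
             q / 60.0,                              # channel 2: sequencer confidence
             1.0 if b == reference[i] else 0.0)     # channel 3: matches reference?
            for i, (b, q) in enumerate(zip(seq, quals))
        ]
        image.append(row)
    return image

reference = "ACGT"
reads = [
    ("ACGT", [50, 48, 52, 47]),   # read agreeing with the reference
    ("ACTT", [49, 51, 30, 46]),   # low-quality mismatch at position 2
]
image = pileup_channels(reads, reference)
print(image[1][2])   # the mismatching cell: (base, quality, ref-match)
```

Framing variant calling this way lets the same convolutional machinery that classifies cats and dogs decide whether a pile of mismatching reads is a real mutation or a sequencing artifact.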

DeepVariant is now open source and available here: https://github.com/google/deepvariant

Google competes with many other vendors on many fronts. But while its competitors are focused on battling for today’s market opportunities, Google is busy in a solitary race to control the battlefield of the future: the human body.

The human body is the ultimate data center.

I always wondered how it would be if a superior species landed on earth and showed us how they play chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?
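
The controller-and-child loop is easy to caricature. AutoML’s real controller is a recurrent network trained with reinforcement learning on validation accuracy; the search space, the random proposal policy, and the made-up scoring formula below are stand-ins that only convey the shape of the search.

```python
import random

# A "controller" proposes child architectures from a search space, each
# child is scored, and the best survives. All values are invented for
# illustration.

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "filters": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def propose(rng):
    """Controller step: sample one child architecture."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child and measuring validation
    accuracy (made-up formula: depth and width help, big kernels hurt)."""
    return 0.5 + 0.02 * arch["layers"] + 0.001 * arch["filters"] - 0.01 * arch["kernel"]

def search(trials=50, seed=0):
    """Keep the best-scoring child seen over a fixed budget of trials."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = propose(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = search()
print(best, round(score, 3))
```

In the real system each `evaluate` call means training a full network, which is why the search is so expensive — and why a controller that learns where to look beats blind sampling.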

We are waiting to develop a human-level artificial intelligence and see if it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.