What if we could cure people of inherited diseases before they were born?

From Pioneering Stem Cell Trial Seeks to Cure Babies Before Birth

Elianna has a rare inherited blood disorder called alpha thalassemia major, which prevents her red blood cells from forming properly. The disease, which has no cure, is usually fatal for a developing fetus.

But while still in her mother’s womb, Elianna received a highly daring treatment. Doctors isolated healthy blood stem cells from her mother and injected them through a blood vessel that runs down the umbilical cord. Four months later, Elianna was born with a loud cry and a glistening head of hair, defying all medical odds.

Elianna is the first in a pioneering clinical trial that pushes the boundaries of stem cell transplants.

and

The idea that you can treat a fetus while it is still inside the mother’s womb is pretty radical. Doctors have long thought that fetuses are encased in an impermeable barrier that shields the developing human from outside insults.

Early experiments with fetal stem cell transplants seemed to support the dogma. Most trials using the father’s stem cells failed, leading doctors to believe that the procedure couldn’t be done.

But subsequent research in animals discovered a crucial tidbit of information: the mother’s immune system, not the fetus, was rejecting the father’s stem cells.

There’s more: rather than being quarantined, fetuses continuously exchange cells with their mothers, so much so that fetal cells can actually be isolated from a mother’s bloodstream.

The reason for this is to quiet both parties’ immune systems. Because the fetus carries part of the father’s DNA, a portion of its cells is foreign to the mother. This back-and-forth trafficking of cells “teaches” both the mother’s and the fetal immune system to calm down: even though the cells aren’t a complete genetic match, the fetus’s cells tolerate the mother’s cells, and vice versa. In this way, during pregnancy the fetal immune system holds its fire against the mother.

This harmonious truce changes once the baby is born. The child’s immune system grinds into action, attacking any cells that are foreign to its body. After birth, a bone marrow transplant requires drugs to kill off the infant’s own bone marrow cells and make room for healthy ones. It also requires high doses of immunosuppressant drugs to keep the infant’s immune system at bay while the new, healthy cells do their job.

Enabling accessible and scalable eye tracking research

From Pupil Labs

We develop open source software and build accessible hardware for eye tracking.
Our mobile eye tracking headsets have research grade specs and enable accessible and scalable eye tracking research.

Supercharge your VR and AR eye tracking research and development with our eye tracking add-ons for the latest consumer hardware.

Take full control of the software and modify to your needs – it’s 100% open source!

Remarkable hardware design supported by an open source software platform, with an impressive list of customers.

A friend of mine mentioned them to me. Can’t wait to try it.

There is an increasing ‘lifestyle use’ of cognitive-enhancing drugs by healthy people

From Use of ‘smart drugs’ on the rise

The use of drugs by people hoping to boost mental performance is rising worldwide, finds the largest ever study of the trend. In a survey of tens of thousands of people, 14% reported using stimulants at least once in the preceding 12 months in 2017, up from 5% in 2015.

The non-medical use of substances—often dubbed smart drugs—to increase memory or concentration is known as pharmacological cognitive enhancement (PCE), and it rose in all 15 nations included in the survey. The study looked at prescription medications such as Adderall and Ritalin—prescribed medically to treat attention deficit hyperactivity disorder (ADHD)—as well as the sleep-disorder medication modafinil and illegal stimulants such as cocaine.

US respondents reported the highest rate of use: in 2017, nearly 30% said they had used drugs for PCE at least once in the preceding 12 months, up from 20% in 2015.

But the largest increases were in Europe: use in France rose from 3% in 2015 to 16% in 2017; and from 5% to 23% in the United Kingdom.

The ethical concerns mentioned (and linked) in the article were discussed in 2008. Ten years later, the consumption of smart drugs is exploding.

Clearly, people are way more concerned about information processing than ethical issues or side effects.

Someday, maybe, we could regrow limbs

From Axolotl Genome Slowly Yields Secrets of Limb Regrowth | Quanta Magazine

Salamanders are champions at regenerating lost body parts. A flatworm called a planarian can grow back its entire body from a speck of tissue, but it is a very small, simple creature. Zebra fish can regrow their tails throughout their lives. Humans, along with other mammals, can regenerate lost limb buds as embryos. As young children, we can regrow our fingertips; mice can still do this as adults. But salamanders stand out as the only vertebrates that can replace complex body parts that are lost at any age, which is why researchers seeking answers about regeneration have so often turned to them.

While researchers studying animals like mice and flies progressed into the genomic age, however, those working on axolotls were left behind. One obstacle was that axolotls live longer and mature more slowly than most lab animals, which makes them cumbersome subjects for genetics experiments. Worse, the axolotl’s enormous and repetitive genome stubbornly resisted sequencing.

Then a European research team overcame the hurdles and finally published a full genetic sequence for the laboratory axolotl earlier this year. That accomplishment could change everything.

“The genome was a huge problem that had been lingering over the heads of everyone working in axolotl,” said Jessica Whited, the assistant professor and researcher who supervises this laboratory at Harvard Medical School and Brigham and Women’s Hospital. Now that she and other researchers have the whole axolotl genome, they’re hoping to unlock secrets of regeneration and perhaps even to learn how humans could harness this power for ourselves.

and

After an amputation, a salamander bleeds very little and seals off the wound within hours. Cells then migrate to the wound site and form a blob called a blastema. Most of these recruits seem to be cells from nearby that have turned back their own internal clocks to an unspecialized or “dedifferentiated” state more like that seen in embryos. But it’s unclear whether and to what extent the animal also calls on reserves of stem cells, the class of undifferentiated cells that organisms maintain to help with healing. Whatever their origin, the blastema cells redifferentiate into new bone, muscle and other tissues. A perfect new limb forms in miniature, then enlarges to the exact right size for its owner.

Scientists don’t know whether axolotls use the same mechanisms to regenerate their internal organs as their limbs. They also don’t know why an axolotl can grow back an arm many times in a row but not indefinitely — after being amputated five times, most axolotl limbs stop coming back. Another mystery is how a limb knows to stop growing when it reaches the right size.

and

Monaghan is studying axolotl retinas to try to improve the outcomes of prospective stem cell therapies in aging human eyes. He also thinks finding out how axolotls rapidly regrow their lungs could help us learn to heal human lungs, which naturally have some regenerative power.

McCusker has studied how the tissue environment of a salamander’s regenerating limb controls the behavior of cells. Someday, we might be able to regulate the environment around a cancer cell and force it to behave normally.

National AI Strategies Around the World

From An Overview of National AI Strategies – Politics + AI – Medium

In the past fifteen months, Canada, Japan, Singapore, China, the UAE, Finland, Denmark, France, the UK, the EU Commission, South Korea, and India have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.

This article summarizes the key policies and goals of each national strategy. It also highlights relevant policies and initiatives that the countries have announced since the release of their initial strategies.

These strategies are the foundation for future policies about human enhancement technology adoption. Once a nation sees the benefits of artificial intelligence, it will be hard to limit its adoption to machines alone.

What happens if humans get external or prosthetic nerves?

From Juan Enriquez: What will humans look like in 100 years? | TED Talk

And four of the smartest people that I’ve ever met — Ed Boyden, Hugh Herr, Joe Jacobson, Bob Langer — are working on a Center for Extreme Bionics. And the interesting thing of what you’re seeing here is these prosthetics now get integrated into the bone. They get integrated into the skin. They get integrated into the muscle. And one of the other sides of Ed is he’s been thinking about how to connect the brain using light or other mechanisms directly to things like these prosthetics. And if you can do that, then you can begin changing fundamental aspects of humanity. So how quickly you react to something depends on the diameter of a nerve. And of course, if you have nerves that are external or prosthetic, say with light or liquid metal, then you can increase that diameter and you could even increase it theoretically to the point where, as long as you could see the muzzle flash, you could step out of the way of a bullet. Those are the order of magnitude of changes you’re talking about.

The TED Talk only briefly mentions this aspect, but it’s worth watching to have an idea of the most prominent scientists working on human body augmentation technologies today.

3D-printed organs within the next five years

From Implantable 3D-printed organs could be coming sooner than you think | TechCrunch

Prellis Biologics has just taken a big step on the path toward developing viable 3D-printed organs for humans.

The company, which was founded in 2016 by research scientists Melanie Matheu and Noelle Mullin, staked its future (and a small $3 million investment) on a new technology to manufacture capillaries, the one-cell-thick blood vessels that are the pathways through which oxygen and nutrients move to nourish tissues in the body.

Without functioning capillary structures, it is impossible to make organs, according to Matheu. They’re the most vital piece of the puzzle in the quest to print viable hearts, livers, kidneys and lungs, she said.

and

Now, Prellis has published findings indicating that it can manufacture those capillaries at a size and speed that would deliver 3D-printed organs to the market within the next five years.

Prellis uses holographic printing technology that creates three-dimensional layers deposited by a light-induced chemical reaction that happens in five milliseconds.

This feature, according to the company, is critical for building tissues like kidneys or lungs. Prellis achieves this by combining a light-sensitive photo-initiator with traditional bioinks that allows the cellular material to undergo a reaction when blasted with infrared light, which catalyzes the polymerization of the bioink.

and

Prellis’ organs will also need to be placed in a bioreactor to sustain them before they’re transplanted into an animal, but the difference is that the company aims to produce complete organs rather than sample tissue or a small cell sample, according to a statement. The bioreactors can simulate the biomechanical pressures that ensure an organ functions properly.

More reading about this technology: https://www.prellisbiologics.co/prellis-literature

Intelligent prosthetic ankles, so you can wear a dress shoe, a running shoe, a flat

From “Smart” prosthetic ankle takes fear out of rough terrain, stairs | Vanderbilt News | Vanderbilt University

Prosthetic ankles available now are static, meaning they don’t anticipate movement and adjust the feet to different terrains. Many users swing the prosthetic leg outward ever so slightly during regular walking to make up for feet that don’t naturally roll through the motion of walking.

and

The ankle has a tiny motor, actuator, sensors and chip that work together to either conform to the surface the foot is contacting or remain stationary, depending on what the user needs.

Goldfarb said the problem with finding workable prosthetic ankles is so pervasive that many amputees only wear one type of shoe – whichever one works best with their prosthetic.

“Our prosthetic ankle is intelligent, so you can wear a dress shoe, a running shoe, a flat – whatever you’d like – and the ankle adapts,” Goldfarb said. “You can walk up slopes, down slopes, up stairs and down stairs, and the device figures out what you’re doing and functions the way it should.”

Watch the video in the TechCrunch article.

Instantly correct robot mistakes with nothing more than brain signals and the flick of a finger

From How to control robots with brainwaves and hand gestures | MIT News

By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
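The article doesn’t detail the signal processing, but the division of labor it describes is easy to sketch: EEG flags that the robot picked wrong, and coarse left/right EMG gestures scroll to the correction. Here is a minimal Python mock of that supervision loop; the thresholds, the mean-amplitude “detectors,” and the `supervise` function are all made-up stand-ins, not the MIT team’s actual pipeline:

```python
import numpy as np

# Hypothetical threshold -- the real ErrP detector is far more sophisticated.
ERROR_POTENTIAL_THRESHOLD = 0.6

def detect_error(eeg_window: np.ndarray) -> bool:
    """Flag a likely error-related potential (ErrP) in an EEG window.

    Stand-in detector: treat the window's mean amplitude as an ErrP
    score and compare it against a fixed threshold.
    """
    return float(np.mean(eeg_window)) > ERROR_POTENTIAL_THRESHOLD

def classify_gesture(emg_window: np.ndarray) -> str:
    """Map an EMG window to a coarse 'left'/'right' scroll gesture.

    Stand-in classifier: use the sign of the mean signal, reflecting
    the article's point that EMG resolves little beyond left vs. right.
    """
    return "right" if float(np.mean(emg_window)) > 0 else "left"

def supervise(targets, robot_choice, eeg_window, emg_windows):
    """Return the final target: keep the robot's pick unless EEG flags
    an error, in which case EMG gestures scroll to the correction."""
    if not detect_error(eeg_window):
        return robot_choice
    index = targets.index(robot_choice)
    for emg_window in emg_windows:  # each gesture moves the selection one slot
        step = 1 if classify_gesture(emg_window) == "right" else -1
        index = (index + step) % len(targets)
    return targets[index]
```

The point the code makes is the same one the researchers make: neither signal is good enough alone, but a binary "that's wrong" channel plus a binary "left/right" channel compose into a usable correction interface.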

Combine passive blocking of sleep-disturbing sounds with sounds engineered to mask what gets past the blocking

From Bose gets into the business of sleep | TechCrunch

Sleep deprivation costs the US economy $411 billion a year. It’s bad for your health and generally turns you into a cranky piece of garbage no one wants to be around. So, naturally, Bose wants to be in the sleep business. Tomorrow, the company launches SleepBuds, its first foray into helping people fall and stay asleep.

There’s no active noise cancelling on-board, unlike Bose’s better known efforts. Instead, the on-board soundscapes (things like leaves rustling and trickling waterfalls) are designed to essentially drown out noise.

The SleepBuds never blocked the sound altogether. Instead, it was more of a mix of sounds, with the strange effect of hearing someone snoring off in the distance in a wind-swept field. You can always adjust the sound levels on the app, but you don’t want to make things too loud, for obvious reasons.

Interestingly, the company opted not to offer streaming here, instead storing files locally. There are ten preloaded sounds, with the option of adding more. This was primarily done for battery reasons. You should get about 16 hours on a charge, with 16 additional hours via the charging case.

It is the end of the poker face

From Poppy Crum: Technology that knows what you’re feeling | TED Talk

Your pupil doesn’t lie. Your eye gives away your poker face. When your brain’s having to work harder, your autonomic nervous system drives your pupil to dilate. When it’s not, it contracts. When I take away one of the voices, the cognitive effort to understand the talkers gets a lot easier. I could have put the two voices in different spatial locations, I could have made one louder. You would have seen the same thing. We might think we have more agency over the reveal of our internal state than that spider, but maybe we don’t.

Must-watch.

The moment a company brings to market a mainstream AR wearable, like smart contact lenses, that can act as an application platform, like iOS, and supports the installation of third-party applications through a marketplace, like the App Store, there will be a rush to develop AI apps that can read people’s behaviour in real time, in a way that most human brains cannot.

It doesn’t matter if the intentions are good. Such applications would expose vulnerabilities we are not prepared to defend against.
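To see how little machinery such an application would need, here is a toy pupil-dilation monitor in Python: flag elevated cognitive load whenever pupil diameter rises well above a personal baseline, which is the signal Crum describes. The baseline window and threshold are invented for illustration; a real system would calibrate per person and control for lighting:

```python
import numpy as np

def cognitive_load_flags(diameters_mm, baseline_samples=50, threshold=1.15):
    """Mark samples where pupil diameter exceeds a personal baseline.

    baseline_samples: how many initial samples define the resting baseline.
    threshold: fractional dilation over baseline that counts as "load"
               (both values are illustrative, not from the talk).
    """
    d = np.asarray(diameters_mm, dtype=float)
    baseline = d[:baseline_samples].mean()
    return d > baseline * threshold
```

A few lines of arithmetic over an eye-tracker feed is all it takes, which is exactly why an AR platform with always-on cameras makes this kind of inference so easy to deploy.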

Longevity-as-a-service, via deep learning

From With strategic investment, Insilico Medicine is using deep learning to defeat aging | TechCrunch

In the classical model used by pharmaceutical companies, scientists in an R&D lab investigate naturally occurring molecules while searching for potential therapeutic properties. When they find a molecule that could be a candidate, they begin a series of tests to determine the treatment efficacy of the molecules (and also to receive FDA approval).

Rather than going forward through the process, Insilico works backwards. The company starts with an end objective — say stopping aging — and then uses a toolbox of deep learning algorithms to devise ideal molecules de novo. Those molecules may not exist anywhere in the world, but can be “manufactured” in the lab.

The key underlying technique for the company is what are known as GANs, or generative adversarial networks with reinforcement learning. At a high-level, GANs include a neural net “generator” that creates new products (in this case, molecules), and a discriminator that classifies the new product. Those neural nets then adapt over time in order to compete against each other more effectively.

GANs have been used to create fake photos that look almost photorealistic, but that no camera has ever taken. Zhavoronkov suggested to me that clinical patient data may one day be manufactured — providing far more data while protecting patient privacy.
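As a toy illustration of the adversarial loop described above (a generator proposing samples, a discriminator grading them, each adapting against the other), here is a minimal one-dimensional GAN in plain NumPy. It matches a target Gaussian rather than generating molecules, and everything in it (the linear models, the learning rate, the small weight decay added to damp the oscillation typical of simultaneous gradient descent) is a made-up stand-in for Insilico’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.03, 64
REAL_MEAN = 4.0          # the "real data" distribution is N(4, 1)

for step in range(4000):
    z = rng.standard_normal(batch)
    fake = a * z + b
    real = REAL_MEAN + rng.standard_normal(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_real = sigmoid(w * real + c) - 1.0
    grad_fake = sigmoid(w * fake + c)
    w -= lr * (np.mean(grad_real * real + grad_fake * fake) + 0.05 * w)
    c -= lr * np.mean(grad_real + grad_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1.
    g = (sigmoid(w * fake + c) - 1.0) * w
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# After training, the generator's samples should cluster near REAL_MEAN.
samples = a * rng.standard_normal(1000) + b
```

The competition is the whole trick: the generator never sees the real data directly, only the discriminator’s verdicts, yet it is pulled toward the real distribution, which is the same dynamic Insilico exploits to propose molecules that exist nowhere in nature.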

There’s nothing you can do with a chip in your brain that we can’t do better

From Testing the CTRL-Labs wristband that lets you control computers with your mind – The Verge

CTRL-Labs’ work is built on a technology known as differential electromyography, or EMG. The band’s inside is lined with electrodes, and while they’re touching my skin, they measure electrical pulses along the neurons in my arm. These superlong cells are transmitting orders from my brain to my muscles, so they’re signaling my intentions before I’ve moved or even when I don’t move at all.

EMG is widely used to measure muscle performance, and it’s a promising option for prosthetic limb control. CTRL-Labs isn’t the first company to imagine an EMG-based interface, either. Canadian startup Thalmic Labs sells an EMG gesture-reading armband called the Myo, which detects muscle movements and can handle anything from controlling a computer to translating sign language. (CTRL-Labs used Myo armbands in early prototyping, before designing its own hardware.)

and

One issue is interference from what Bouton refers to as motion artifacts. The bands have to process extraneous data from accidental hand movements, external vibrations, and the electrodes shifting around the skin. “All those things can cause extra signal you don’t want,” he says. An electrode headset, he notes, would face similar problems — but they’re serious issues for either system.

Reardon says CTRL-Labs’ band can pick out far more precise neural activity than the Myo, which Thalmic bills as a muscle-reading system rather than a brain-computer interface. And the band is supposed to work consistently anywhere on the wrist or lower arm, as long as it’s fitted snugly. (The prototype felt like wearing a thick, metallic elastic bracelet.) But Bouton, who uses EMG to find and activate muscles of people with paralysis, says users would get the best results from hitting exactly the same spot every time — which the average person might find difficult. “Even just moving a few millimeters can make a difference,” he says.

Long, fascinating profile of CTRL-Labs. I saw them presenting in NYC at the O’Reilly AI Conference, when they announced the availability of their wristband within the year.

From Augmented Reality to Altered Reality

From Dehumanization of Warfare: Legal Implications of New Weapon Technologies:

However, where soldiers are equipped with cybernetic implants (brain-machine interfaces) which mediate between an information source and the brain, the right to “receive and impart information without interference from a public authority” gains a new dimension. There are many technologies which provide additional information to armed forces personnel, e.g., heads-up displays for fighter pilots and the Q-warrior augmented reality helmets from BAE Systems, which are unlikely to impact this right.

However, there are technologies in development which are intended to filter data in order to prevent information overload. This may be particularly relevant where the implant or prosthetic removes visual information from view, or is designed to provide targeting information to the soldier. According to reports, software has been devised in Germany which allows for the deletion of visual information by smart glass or contact lens.

As one futurist was quoted as saying “So if you decide you don’t like homeless people in your city, and you use this software and implant it in your contact lenses, then you won’t see them at all.”

An entire section of this book is dedicated to the legal and ethical implications of using supersoldiers, augmented by bionic prosthetics, augmented reality devices, and neural interfaces, in modern warfare. Highly recommended.

The book is now featured in the “Key Books” section of H+.

Our results are nearly indistinguishable from the real video

From Forget DeepFakes, Deep Video Portraits are way better (and worse) | TechCrunch

Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another.

and

There’s no way to make a person do something or make an expression that’s too far from what they do on camera, though. For instance, the system can’t synthesize a big grin if the person is looking sour the whole time (though it might try and fail hilariously). And naturally there are all kinds of little bugs and artifacts. So for now the hijinks are limited.

Astounding results. You must watch the video.

Now, what happens if this video editing happens in real time to alter the reality perceived through AR glasses? For example, the ones a soldier might use.

Instead of replacing one damaged cornea with one healthy one, you could grow enough cells from one donated cornea to print 50 artificial ones

From Scientists have 3D printed the most advanced artificial cornea ever using human cells – The Verge

It was tricky to find the right recipe for an ink that’s thin enough to squirt through a 3D printer’s nozzle, says Che Connon, a tissue engineer at Newcastle University who was one of the creators of the artificial cornea. This bio-ink didn’t just have to be thin — it also had to be stiff enough that it could hold its shape as a 3D structure. To get the right consistency, the researchers added a jelly-like goo called alginate and stem cells extracted from donor corneas, along with some ropy proteins called collagen.

But there’s still a long way to go before these artificial corneas will even get close to a human eyeball: Connon wants to fine tune the printing process first, he says, and the artificial cornea will also need to go through safety studies in animals. But this study is proof that you can 3D print something that looks like a cornea, and contains mostly the same ingredients. It’s also the first time researchers have recreated the cornea’s distinctive, curved shape.

If and when this technique is perfected, tech-augmented corneas in place of smart contact lenses is not an unthinkable scenario.

US Department of Defense has 592 projects powered by Artificial Intelligence

From Pentagon developing artificial intelligence center

Speaking at the House Armed Services Committee April 12, Mattis said “we’re looking at a joint office where we would concentrate all of DoD’s efforts, since we have a number of AI efforts underway right now. We’re looking at pulling them all together.”

He added that the department counts 592 projects as having some form of AI in them, but noted that not all of those make sense to tie into an AI center. And Griffin wants to make sure smaller projects that are close to completion get done and out into prototyping, rather than tied up in the broader AI project.

And then, of course, there are those AI projects so secret that they won’t even be listed among those 592. It would be interesting to see how many of these relate to the super-soldier use case.

A brain-scanner could be an instrument of explicit coercion

From TED 2018: Thought-Reading Machines and the Death of Love | WIRED

The San Francisco startup is developing an optical imaging system—sufficiently compact to fit inside a skull cap, wand, or bandage—that scatters and captures near-infrared light inside our bodies to create holograms that reveal our occluded selves. The devices could diagnose cancers as well as cardiovascular or other diseases. But because the wavelength of near-infrared light is smaller than a micron, smaller than the smallest neuron, Jepsen believes the resolution of the technology is fine enough to make thoughts visible.

and

the company’s promise depended on combining these elements: proof of the entire body’s translucence; holographic techniques, some dating to the 1960s; and Asian silicon manufacturing, which can make new chip architectures into commercial products. Openwater may be less than two years old, but Jepsen has been thinking about a holographic scanner for decades. She is uniquely suited to the challenge. Her early research was in holography; she led display development at Intel, Google X, and Facebook Oculus; and she has shipped billions of dollars of chips.

and

The idea derives from Jack Gallant, a cognitive neuroscientist at UC Berkeley, who decoded movies shown to subjects in a functional MRI machine by scanning the oxygenated blood in their brains. The images Gallant recovered are blurry, because the resolution of fMRI is comparatively coarse. Holography would not only see blood better but capture the electrochemical pulses of the neurons themselves.

Wearable device picks up neuromuscular signals saying words “in your head”

From Computer system transcribes words users “speak silently” | MIT News

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

and

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
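The “trained to correlate particular signals with particular words” step can be sketched as a classifier over feature vectors. Here is a nearest-centroid toy in Python; the feature vectors and vocabulary are invented, and the real system’s signal processing and model are surely far more sophisticated than this:

```python
import numpy as np

class SilentSpeechDecoder:
    """Toy decoder: map neuromuscular feature vectors to vocabulary words
    by nearest centroid. A stand-in for the trained model in the article."""

    def __init__(self):
        self.centroids = {}  # word -> mean feature vector of its examples

    def train(self, examples):
        """examples: iterable of (feature_vector, word) pairs."""
        grouped = {}
        for features, word in examples:
            grouped.setdefault(word, []).append(np.asarray(features, dtype=float))
        self.centroids = {w: np.mean(v, axis=0) for w, v in grouped.items()}

    def transcribe(self, features):
        """Return the vocabulary word whose centroid is nearest."""
        features = np.asarray(features, dtype=float)
        return min(self.centroids,
                   key=lambda w: float(np.linalg.norm(features - self.centroids[w])))
```

Note how naturally Kapur’s point about training data falls out of this structure: every correction during ordinary use is another (features, word) pair to fold into the centroids.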

Sci-fi movies have shaped the collective imagination of neural interfaces as some sort of hardware port or dongle sticking out of the neck, connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far off in the future.

This approach is much more feasible. Imagine if this device, AlterEgo, became the main computer peripheral, replacing keyboard and mouse.
The question is not just about accuracy, but also about how its speed compares to existing input methods.

Watch the video.

From 3D Printing to Bioprinting and Precision Medicine

From How 3D printing is revolutionizing healthcare as we know it | TechCrunch

3D printing is performed by telling a computer to apply layer upon layer of a specific material (quite often plastic or metal powders), molding them one layer at a time until the final product — be it a toy, a pair of sunglasses or a scoliosis brace — is built. Medical technology is now harnessing this technology and building tiny organs, or “organoids,” using the same techniques, but with stem cells as the production material. These organoids, once built, will in the future be able to grow inside the body of a sick patient and take over when an organic organ, such as a kidney or liver, fails.

researchers in Spain have now taken the mechanics of 3D printing — that same careful layer-upon-layer approach in which we can make just about anything — and revealed a 3D bioprinter prototype that can produce human skin. The researchers, working with a biological ink that contains both human plasma as well as material extracts taken from skin biopsies, were able to print about 100 square centimeters of human skin in the span of about half an hour.

and

A 3D-printed pill, unlike a traditionally manufactured capsule, can house multiple drugs at once, each with different release times. This so-called “polypill” concept has already been tested for patients with diabetes and is showing great promise.

An exoskeleton for athletes and older skiers

From This Affordable Exoskeleton Can Make You A Better Skier

Roam’s founder and CEO is Tim Swift, a longtime veteran of Ekso Bionics, one of the world’s leaders in exoskeletons. Swift loved what Ekso was building, but balked at the hefty price tag that came with systems designed to help the disabled walk. Building devices that aren’t accessible to the masses didn’t make sense to him anymore. So he struck out on his own, aiming to democratize exoskeletons.

and

Roam is using plastics and fabrics, and air for transmission. The company’s core insight, Swift says, is a unique fabric actuator that’s very lightweight, yet strong for its volume and weight. The system relies on valves and a backpack power pack to provide torque to the legs. It also has a machine learning element that’s meant to understand how you ski, and anticipate when you’re going to make a turn in order to deliver the extra torque just when you want it.

When ready for market, the skiing exoskeleton is expected to weigh under 10 pounds, including about four or five pounds of equipment that goes in the backpack.

From This skiing exoskeleton is designed to take the strain off your legs – The Verge

The company claims the exoskeleton will make older skiers feel years younger and able to stay out on the slope for longer. And for athletes, the device will supposedly help them train for days in a row with less fatigue.

So far the company has only built prototypes, but it’s in the process of finalizing a commercial product, set for release in January 2019. Interested skiers can pay $99 to reserve a unit, although the final price is expected to be somewhere between $2,000 and $2,500.

Exoskeletons have a few clear use cases: people with disabilities, workers who do heavy lifting, and supersoldiers. Athletes, and healthy people who want to enjoy sports into their later years, are interesting new possibilities.

MIT terminates collaboration with Nectome

From MIT severs ties to company promoting fatal brain uploading – MIT Technology Review

According to an April 2 statement, MIT will terminate Nectome’s research contract with Media Lab professor and neuroscientist Edward Boyden.

MIT’s connection to the company drew sharp criticism from some neuroscientists, who say brain uploading isn’t possible.

“Fundamentally, the company is based on a proposition that is just false. It is something that just can’t happen,” says Sten Linnarsson of the Karolinska Institute in Sweden.

He adds that by collaborating with Nectome, MIT had lent credibility to the startup and increased the chance that “some people actually kill themselves to donate their brains.”

It didn’t take long.

It’s hard enough for normal companies to withstand the pressure of the press and public opinion. It must be nearly impossible when you are trying to commercialize an attempt to escape death.

Many of the companies that are covered here on H+ face the same challenge.

AR glasses further augmented by human assistants 

From Aira’s new smart glasses give blind users a guide through the visual world | TechCrunch

Aira has built a service that basically puts a human assistant into a blind user’s ear by beaming live-streaming footage from the glasses camera to the company’s agents who can then give audio instructions to the end users. The guides can present them with directions or describe scenes for them. It’s really the combination of the high-tech hardware and highly attentive assistants.

The hardware the company has run this service on in the past has been a bit of a hodgepodge of third-party solutions. This month, the company began testing its own smart glasses solution called the Horizon Smart Glasses, which are designed from the ground-up to be the ideal solution for vision-impaired users.

The company charges based on usage; $89 per month will get users the device and up to 100 minutes of usage. There are various pricing tiers for power users who need a bit more time.

The glasses integrate a 120-degree wide-angle camera so guides can gain a fuller picture of a user’s surroundings and won’t have to instruct them to point their head in a different direction quite as much. It’s powered by what the startup calls the Aira Horizon Controller, which is actually just a repurposed Samsung smartphone that powers the device in terms of compute, battery and network connection. The controller is appropriately controlled entirely through the physical buttons and also can connect to a user’s smartphone if they want to route controls through the Aira mobile app.

Interesting hybrid implementation and business model, but I have serious doubts that a solution depending on human assistants can scale at a planetary level, or retain the necessary quality of service at that scale.

Eye Tracking For AR Devices?

From Eye Tracking Is Coming to Virtual Reality Sooner Than You Think. What Now? | WIRED

That button had activated the eye-tracking technology of Tobii, the Swedish company where Karlén is a director of product management for VR. Two cameras inside the headset had begun watching my eyes, illuminating them with near-IR light, and making sure that my avatar’s eyes did exactly what mine did.

Tobii isn’t the only eye-tracking company around, but with 900 employees, it may be the largest. And while the Swedish company has been around since 2006, Qualcomm’s prototype headset—and the latest version of its Snapdragon mobile-VR platform, which it unveiled at the Game Developers Conference in San Francisco this week—marks the first time that eye-tracking is being included in a mass-produced consumer VR device.

and

Eye-tracking unlocks “foveated rendering,” a technique in which graphical fidelity is only prioritized for the tiny portion of the display your pupils are focused on. For Tobii’s version, that’s anywhere from one-tenth to one-sixteenth of the display; everything outside that area can be dialed down as much as 40 or 50 percent without you noticing, which means less load on the graphics processor. VR creators can leverage that luxury in order to coax current-gen performance out of a last-gen GPU, or achieve a higher frame rate than they might otherwise be able to.
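To make the trade concrete, here is a minimal sketch of the foveated-rendering idea. This is not Tobii’s implementation; the foveal radius and falloff values are illustrative assumptions loosely matched to the figures quoted above.

```python
# Minimal sketch of foveated rendering's core decision: shade at full
# fidelity near the gaze point and dial quality down elsewhere. Not
# Tobii's implementation; radius and falloff values are illustrative.

def shading_quality(px, py, gaze_x, gaze_y, fovea_radius=0.1):
    """Quality factor in [0.5, 1.0] for a pixel, with all coordinates
    normalized to [0, 1] on both axes."""
    dist = ((px - gaze_x) ** 2 + (py - gaze_y) ** 2) ** 0.5
    if dist <= fovea_radius:
        return 1.0  # full fidelity inside the foveal region
    # Outside the fovea, fall off linearly down to 50% quality,
    # echoing the "dialed down 40 or 50 percent" figure above.
    falloff = min((dist - fovea_radius) / 0.4, 1.0)
    return 1.0 - 0.5 * falloff
```

A real renderer would evaluate this per tile rather than per pixel; the point is only that most of the frame can be shaded at half quality without the viewer noticing, which is where the GPU savings come from.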

That’s just the ones and zeros stuff. There are compelling interface benefits as well. Generally, input in VR is a three-step process: look at something, point at it to select it, then click to input the selection. When your eyes become the selection tool, those first two steps become one. It’s almost like a smartphone, where pointing collapses the selection and click into a single step. And because you’re using your eyes and not your head, that means less head motion, less fatigue, less chance for discomfort.

and

There’s also that whole cameras-watching-your-eyes thing. Watching not just what your eyes are doing, but where they look and for how long—in other words, tracking your attention. That’s the kind of information advertisers and marketers would do just about anything to get their hands on. One study has even shown that gaze-tracking can be (mis)used to influence people’s biases and decision-making.

“We take a very hard, open stance,” he says. “Pictures of your eyes never go to developers—only gaze direction. We do not allow applications to store or transfer eye-tracking data or aggregate over multiple users. It’s not storable, and it doesn’t leave the device.”

Tobii does allow for analytics collection, Werner concedes; the company has a business unit focused on working with research facilities and universities. He points to eye-tracking’s potential as a diagnostic tool for autism spectrum disorders, to its applications for phobia research. But anyone using that analytical license, he says, must inform users and make eye-tracking data collection an opt-in process.

There is no reason why eye tracking couldn’t do the same things (and pose the same risks) in AR devices.

MIT Intelligence Quest Launch Event Videos

From MIT IQ Launch

On March 1, we convened at Kresge Auditorium on the MIT campus to set out on the MIT Intelligence Quest — an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society.

MIT faculty, alumni, students, and friends talked about their work across all aspects of this domain — from unpublished research, to existing commercial enterprises, to the social and ethical implications of AI.

Learn why and how MIT is primed to take the next breakthrough step in advancing the science and applications of intelligence by clicking on the available presentations below.

MIT announced the Intelligence Quest in February. This is the full launch event; dozens of presentations were recorded and are now available online.

Must-watch.

Nectome will preserve your brain, but you have to be euthanized first

From A startup is pitching a mind-uploading service that is “100 percent fatal” – MIT Technology Review

Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal.”

and

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it.

Why Augmented-Reality Glasses Are Ugly

From Why Do Augmented-Reality Glasses Look So Bad? | WIRED

“The battle is between immersive functionality and non-dorky, even cool-looking design. The holy grail is something that not only resembles a normal pair of, say, Gucci glasses, but has functionality that augments your life in a meaningful way.”

Right now, that demands a trade-off. The best AR displays require bulky optical hardware to optimize resolution and provide a wide field-of-view. That makes it possible to do all kinds of cool things in augmented reality. But early versions, like the Meta 2 AR headset, look more like an Oculus Rift than a pair of Warby Parkers. Slimmer AR displays, like the one used in Google Glass, feel more natural to wear, but they sit above or next to the normal field of vision, so they’re less immersive and less functional. Adding other features to the glasses—a microphone, a decent camera, various sensors—also increases bulk and makes it harder to create something comfortable or stylish.

This tension has split the field of AR glasses into two extremes. On one end, you get hulking glasses packed with features to show off the unbridled potential of augmented reality. On the other end, you sacrifice features to make a wearable that looks and feels more like normal eyewear.

What It’s Like Having to Charge Your Arm

From Never Mind Charging Your Phone: Cyborg Angel Giuffria Explains What It’s Like Having to Charge Your Arm – Core77

At SXSW Angel Giuffria, one of America’s better-known cyborgs, encountered a lot of people that wanted her to demo her robotic arm. As a de facto spokeswoman for the prosthetic community, she gamely agreed, with the result being that her batteries wore down faster than normal.

Be sure to read the whole Q&A session that spontaneously developed over Twitter.

Smart glasses designed to help dyslexic people to read words

From These smart glasses convert words into voice for people who are visually impaired – The Verge

The Oton Glass are glasses with two tiny cameras and an earphone on the sides. Half of the lens is a mirror that reflects the user’s eye so that the inner-facing camera can track eye movements and blinks.

Users will look at some text and blink to capture a photo of what’s in front of them, which gets transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and then converted into a voice that plays through the earpiece. If the system is unable to read those words, a remote worker would be available to troubleshoot.

The Oton was most recently a third-place runner-up for the James Dyson award in 2016:

Similar products exist, but none are commercialized yet; they still require technological breakthroughs and trial-and-error in how smart glasses are deployed. The originality of OTON GLASS lies in two aspects: technology and deployment. First, on the technology side, startups such as Orcam Inc. and Hours Technology Inc. are developing smart glasses for blind people, mainly building powerful OCR for English text using machine learning techniques. OTON GLASS, by contrast, focuses on Japanese character recognition, and aims to solve the user’s problems by acting as a hybrid (human-plus-computer) recognizer rather than relying on OCR technology alone. Second, in terms of deployment, OTON GLASS is an all-in-one device that combines camera and glasses, meaning it looks like normal eyewear, and its blink-based capture trigger makes for a natural interaction.

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at the eye’s blood vessels, with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.
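That 70-percent figure, the ability to tell which of two patients had the event, is what statisticians call the c-statistic, or AUC. A toy sketch with invented risk scores shows how it is computed:

```python
def pairwise_accuracy(risk_scores, had_event):
    """Fraction of (event, non-event) patient pairs where the model
    gives the higher score to the patient who had the event.
    (Ties are counted as wrong, for brevity.)"""
    events = [s for s, e in zip(risk_scores, had_event) if e]
    non_events = [s for s, e in zip(risk_scores, had_event) if not e]
    correct = sum(1 for a in events for b in non_events if a > b)
    return correct / (len(events) * len(non_events))

scores = [0.9, 0.4, 0.7, 0.2]        # hypothetical model risk scores
events = [True, False, True, False]  # who had a cardiac event in 5 years
print(pairwise_accuracy(scores, events))
```

On Google’s test, the retina model scores 0.70 on this metric versus 0.72 for SCORE, which is why the result reads as “roughly the same accuracy” despite needing no blood test.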

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?

Self-healing and recyclable electronic skin 

From This electronic skin can heal itself — and then make more skin – The Verge

researchers have created an electronic skin that can be completely recycled. The e-skin can also heal itself if it’s torn apart.

The device, described today in the journal Science Advances, is basically a thin film equipped with sensors that can measure pressure, temperature, humidity, and air flow. The film is made of three commercially available compounds mixed together in a matrix and laced with silver nanoparticles: when the e-skin is cut in two, adding the three compounds to the “wound” allows the e-skin to heal itself by recreating chemical bonds between the two sides. That way, the matrix is restored and the e-skin is as good as new. If the e-skin is broken beyond repair, it can just be soaked in a solution that “liquefies” it so that the materials can be reused to make new e-skin. One day, this electronic skin could be used in prosthetics, robots, or smart textiles.

Nanorobots have potential as intelligent drug delivery systems

From New DNA nanorobots successfully target and kill off cancerous tumors | TechCrunch

Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth

and

DNA nanorobots are a somewhat new concept for drug delivery. They work by getting programmed DNA to fold into itself like origami and then deploying it like a tiny machine, ready for action.

The [Chinese] scientists behind this study tested the delivery bots by injecting them into mice with human breast cancer tumors. Within 48 hours, the bots had successfully grabbed onto vascular cells at the tumor sites, causing blood clots in the tumor’s vessels and cutting off their blood supply, leading to their death.

Remarkably, the bots did not cause clotting in other parts of the body, just the cancerous cells they’d been programmed to target.

CRISPR pioneers now use it to detect infections like HPV, dengue, and Zika

From New CRISPR tools can detect infections like HPV, dengue, and Zika – The Verge

The new tools, developed by the labs of CRISPR pioneers Jennifer Doudna and Feng Zhang, are showcased in two studies published today in the journal Science. In one paper, Doudna’s team describes a system called DETECTR, which can accurately identify different types of the HPV virus in human samples. In the second paper, Zhang’s team shows an upgraded version of SHERLOCK — which was shown last year to detect viruses like Zika and dengue, as well as other harmful bacteria — in human samples.

and

The CRISPR used in the first Science study is called CRISPR-Cas12a. Doudna’s team discovered that when this type of CRISPR snips double-stranded DNA, it does something interesting: it starts shredding single-stranded DNA as well

the CRISPR system is programmed to detect the HPV DNA inside a person’s cells. When CRISPR detects it, it also cuts a “reporter molecule” with single-stranded DNA that releases a fluorescent signal. So if the cells are infected with HPV, scientists are able to see the signal and quickly diagnose a patient. For now, DETECTR was tested in a tube containing DNA from infected human cells, showing it could detect HPV16 with 100 percent accuracy, and HPV18 with 92 percent accuracy.

and

Called SHERLOCK, this system uses a variety of CRISPR enzymes, including Cas12a. Last year, Zhang’s team showed that SHERLOCK uses CRISPR-Cas13a to find the genetic sequence of Zika, dengue, and several other bacteria, as well as the sequences associated with a cancer mutation in a variety of human samples, such as saliva. Now, the team has improved the tool to be 100 times more sensitive and detect multiple viruses — such as Zika and dengue — in one sample simultaneously. It does this by combining different types of CRISPR enzymes, which are unleashed together to target distinct bits of DNA and RNA, another of the major biological molecules found in all forms of life. Some enzymes also work together to make the tool more sensitive.

If you read Doudna’s book, featured in the H+ “Key Books” section, you realise the enormous progress we’ve made in the last 10 years in terms of DNA manipulation thanks to CRISPR, and yet you have a clear understanding that we are just scratching the surface of what is possible.

Sequence your genome for less than $1,000 and sell it via blockchain

From Human sequencing pioneer George Church wants to give you the power to sell your DNA on the blockchain | TechCrunch

MIT professor and godfather of the Human Genome Project George Church wants to put your genes on the blockchain.

His new startup Nebula Genomics plans to sequence your genome for less than $1,000 (the current going rate of whole genome sequencing) and then add your data to the blockchain through the purchase of a “Nebula Token.”

Church and his colleagues laid out in a recently released white paper that this will put the genomic power in the hands of the consumer, as opposed to companies like 23andMe and AncestryDNA, which own your genomic data after you take that spit tube test.

These companies sell that data in large swaths to pharmaceutical and research companies, often for millions of dollars. However, using the blockchain, consumers can choose to sell their own data directly.

and

Those buying up tokens and sequencing their DNA through Nebula don’t have to sell it for money, of course, and Nebula says they can still discover insights about their own genetics through the company app without sharing it elsewhere, if they desire.

However, all bought and sold data will be recorded on the blockchain, which is a technology allowing for the recording of all transactions using a key code known only to the person who holds the information.
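The description above glosses over the mechanism that makes a blockchain record tamper-evident: each entry commits to the hash of the previous one. A toy hash chain (a sketch of the general idea, not Nebula’s design) shows the mechanics:

```python
import hashlib
import json

# Toy append-only hash chain: each block stores the hash of the
# previous block, so editing any past transaction breaks every
# later link. A sketch of the general mechanism, not Nebula's design.

def add_block(chain, transaction):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    block_hash = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"tx": transaction, "prev": prev_hash, "hash": block_hash})
    return chain

def verify(chain):
    """Recompute every hash; any edit to an earlier block is detected."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps({"tx": block["tx"], "prev": prev_hash}, sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True

ledger = []
add_block(ledger, "genome G1 licensed to buyer A")  # hypothetical transactions
add_block(ledger, "genome G1 licensed to buyer B")
assert verify(ledger)
ledger[0]["tx"] = "tampered"
assert not verify(ledger)  # editing history breaks the chain
```

Access control via a “key code known only to the person” would sit on top of this, typically as public-key encryption of the genomic payload; the chain itself only guarantees that the transaction history cannot be silently rewritten.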

Two thoughts:

  • If this idea generates even a tiny bit of money for each individual involved, it might unlock unprecedented access to genetic information for advanced engineering.
  • Our genome is the second-to-last thing we have left to sell. The last is our attention. But once they have our genome, our attention may come for free.

A biohacker injected himself with a DIY herpes treatment in front of a conference audience

From A biohacker injected himself with a DIY herpes treatment in front of a live audience – The Verge

Aaron Traywick, 28, who leads biotech firm Ascendance Biomedical, used an experimental herpes treatment that did not go through the typical route of clinical trials to test its safety.

Instead of being developed by research scientists in laboratories, it was created by a biohacker named Andreas Stuermer, who “holds a masters degree and is a bioentrepreneur and science lover,” according to a conference bio. This is typical of the Ascendance approach. The company believes that FDA regulations for developing treatments are too slow and that having biohackers do the research and experiment on themselves can speed up the process to everyone’s benefit. In the past, the company’s plans have included trying to reverse menopause, a method that is now actually in clinical trials.

From Biohackers Disregard FDA Warning on DIY Gene Therapy – MIT Technology Review

Experts say any gene therapy prepared by amateurs would probably not be potent enough to have much effect, but it could create risks such as an immune reaction to the foreign DNA. “I think warning people about this is the right thing,” says David Gortler, a drug safety expert with the consulting group Former FDA. “The bottom line is, this hasn’t been tested.”

The problem facing regulators is that interest in biohacking is spreading, and it’s increasingly easy for anyone to obtain DNA over the internet.

The last sentence is key. As in the tech industry, once you trigger bottom-up adoption the process is irreversible. And disruptive.

Police in China have begun using sunglasses equipped with facial recognition technology

From Chinese police spot suspects with surveillance sunglasses – BBC News

The glasses are connected to an internal database of suspects, meaning officers can quickly scan crowds while looking for fugitives.

The sunglasses have already helped police capture seven suspects, according to Chinese state media.

The seven people who were apprehended are accused of crimes ranging from hit-and-runs to human trafficking.

and

The technology allows police officers to take a photograph of a suspicious individual and then compare it to pictures stored in an internal database. If there is a match, information such as the person’s name and address will then be sent to the officer.

An estimated 170 million CCTV cameras are already in place, and some 400 million new ones are expected to be installed in the next three years.

Many of the cameras use artificial intelligence, including facial recognition technology.

In December 2017, I published Our Machines Can Very Easily Recognise You Among At Least 2 Billion People in a Matter of Seconds. It didn’t take long to go from press claims to real-world implementation.

Human augmentation 2.0 is already here, just not evenly distributed.

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
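One plausible reading of those percentages is a coefficient of variation: the standard deviation of the measured currents divided by their mean. A quick sketch with invented current samples:

```python
import statistics

# Coefficient of variation, one plausible reading of the "4 percent
# variation" figure above. The current samples are invented.

def percent_variation(samples):
    return 100 * statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical normalized currents measured across five synapses:
currents = [1.00, 1.03, 0.98, 1.05, 0.96]
print(round(percent_variation(currents), 1))
```

Uniformity matters here because a neural network trained in software assumes every synaptic weight behaves identically; device-to-device spread translates directly into accuracy loss on chip.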

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI), and before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

From Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
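To make the idea concrete, here is a minimal sketch (my own illustration, not the actual Google or Uber framework work described above) of Monte Carlo dropout, one common way to get an uncertainty estimate out of an ordinary neural network: run the same input through the network many times with random dropout masks, and treat the spread of the outputs as a measure of confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed 2-layer network; the weights are arbitrary, for illustration only.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def mc_dropout_predict(x, n_samples=200, p_drop=0.5):
    """Return the mean prediction and its standard deviation
    across stochastic forward passes with random dropout masks."""
    outs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop  # random dropout mask
        h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
        outs.append((h @ W2).item())
    outs = np.array(outs)
    return outs.mean(), outs.std()

x = np.ones(4)
mean, std = mc_dropout_predict(x)
print(f"prediction {mean:.2f} +/- {std:.2f}")
```

A self-driving car built this way could treat a large standard deviation as "I don't know" and fall back to a safe behavior instead of committing to a possibly fatal guess.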

Improving brain-computer interfaces by decrypting neural patterns

From Cracking the Brain’s Enigma Code – Scientific American

Many human movements, such as walking or reaching, follow predictable patterns, too. Limb position, speed and several other movement features tend to play out in an orderly way. With this regularity in mind, Eva Dyer, a neuroscientist at the Georgia Institute of Technology, decided to try a cryptography-inspired strategy for neural decoding.

Existing brain-computer interfaces typically use so-called ‘supervised decoders.’ These algorithms rely on detailed moment-by-moment movement information such as limb position and speed, which is collected simultaneously with recorded neural activity. Gathering these data can be a time-consuming, laborious process. This information is then used to train the decoder to translate neural patterns into their corresponding movements. (In cryptography terms, this would be like comparing a number of already decrypted messages to their encrypted versions to reverse-engineer the key.)

By contrast, Dyer’s team sought to predict movements using only the encrypted messages (the neural activity), and a general understanding of the patterns that pop up in certain movements.

and

Her team trained three macaque monkeys to either reach their arm or bend their wrist to guide a cursor to a number of targets arranged about a central point. At the same time, the researchers used implanted electrode arrays to record the activity of about 100 neurons in each monkey’s motor cortex, a key brain region that controls movement.

To find their decoding algorithm, the researchers performed an analysis on the neural activity to extract and pare down its core mathematical structure. Then they tested a slew of computational models to find the one that most closely aligned the neural patterns to the movement patterns.

and

Because Dyer’s decoder only required general statistics about movements, which tend to be similar across animals or across people, the researchers were also able to use movement patterns from one monkey to decipher reaches from the neural data of another monkey—something that is not feasible with traditional supervised decoders.
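As a toy illustration of the general strategy (my own simplified sketch, not Dyer's actual algorithm), one can "decode" simulated neural activity without any paired training data: extract the activity's low-dimensional structure, then align its statistics to the known overall statistics of movement.

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" 2-D movements with a known, anisotropic distribution
# (this covariance plays the role of the general movement statistics).
move_cov = np.array([[4.0, 0.0], [0.0, 1.0]])
moves = rng.multivariate_normal([0, 0], move_cov, size=2000)

# Simulated 100-channel neural activity: an unknown linear embedding plus noise.
embed = rng.normal(size=(2, 100))
neural = moves @ embed + 0.1 * rng.normal(size=(2000, 100))

# Step 1: pare the neural activity down to its core low-dim structure (PCA).
neural -= neural.mean(0)
_, _, Vt = np.linalg.svd(neural, full_matrices=False)
latent = neural @ Vt[:2].T               # top-2 principal components

# Step 2: align the latent statistics to the known movement statistics:
# whiten the latents, then color them with the movement covariance.
lat_cov = np.cov(latent.T)
whiten = np.linalg.inv(np.linalg.cholesky(lat_cov))
color = np.linalg.cholesky(move_cov)
decoded = latent @ whiten.T @ color.T

# Up to a rotation/reflection ambiguity, the decoded trajectories now
# share the movement distribution's second-order structure.
print(np.round(np.cov(decoded.T), 1))
```

No decoded sample was ever matched to its true movement during "training", which is why the same movement statistics can be reused across animals; resolving the residual rotation ambiguity is where the model search described above comes in.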

“There are people alive today who will live for 1,000 years”

From Aubrey de Grey: scientist who says humans can live for 1,000 years

Most approaches aimed at combating ageing focus on arresting the harmful byproducts of metabolism, he says. These cause cellular damage and decay, which, in turn, accumulate to trigger the age-related disorders, such as cancer or dementia, that tend to finish us off.

For de Grey, this strategy turns anti-ageing treatment into an impossible game of Whac-A-Mole. Because we understand metabolism so poorly, our efforts to interfere with it remain crude and the process of decay races through the body far quicker than treatments to avert it can keep up.

Instead of stopping the damage, the approach that de Grey has developed at his research centre — Strategies for Engineered Negligible Senescence (SENS), a public charity that he co-founded in 2009 — focuses on repair. This “engineering” approach is designed to keep the process of degradation below the threshold at which it turns into life-threatening disease. “If you can repair the microscopic damage then you are sidestepping the bigger problem [of prevention]”.

Assuming for a moment that some people alive today will be able to extend their lifespan to 200 years, or even 1,000 years, what would they do with such an enormous span of time?

Today humans don’t really have a “life strategy”. They just live, allocating their lifetime to various activities according to what society has established. But what happens when your time extends well beyond the expectations of your society?

You may want to watch de Grey’s TED Talk, too: A roadmap to end aging

Infusions of blood plasma from young donors to rejuvenate the body

From Exclusive: Inside the clinic offering young blood to cure ageing | New Scientist

So it’s a bit odd that this is the epicentre of a phenomenon rocking Silicon Valley: young blood treatments. JR is one of about 100 people who have each paid $8000 to join a controversial trial, offering them infusions of blood plasma from donors aged between 16 and 25 in a bid to turn back the clock. Participants have come from much further afield, including Russia and Australia.

and

in 2014, a team led by Tony Wyss-Coray, a neuroscientist at Stanford University, injected middle-aged mice with plasma from young mice. Sure enough, after three weeks they had anatomical improvements in the brain and a cognitive boost, compared with mice given a placebo.

The plasma didn’t even need to come from the same species – old mice became just as sprightly when the injection came from young humans. “We saw these astounding effects,” Wyss-Coray told New Scientist in 2014. “The human blood had beneficial effects on every organ we’ve studied so far.”

and

Ambrosia is a start-up headquartered in Washington DC. The trial didn’t need regulatory approval because plasma is already a standard treatment to replace missing proteins in people with rare genetic diseases. And there’s no placebo arm to it. All you need to join is a date of birth that makes you over 35 – and a spare $8000.

For your money, you are infused with 2 litres of plasma left over from young people who have donated to blood centres (see “Blood myths”). Unlike the trials looking at young blood’s effects on specific diseases, Ambrosia has a softer target: the general malaise of being old. In addition to measuring changes in about 100 biomarkers in blood, the firm is also “looking for general improvements”, says Jesse Karmazin, who runs the start-up.

The methodology falls short of the normal standards of scientific rigour, so it’s unsurprising that scientists and ethicists have accused Karmazin’s team of taking advantage of public excitement around the idea.

The numbers were as unverifiable as they were impressive: one month after treatment, 70 participants saw reductions in blood factors associated with risk of cancer, Alzheimer’s disease and heart disease, and reductions in cholesterol were on par with those from statin therapy.

and

Risks commonly associated with plasma transfusion include transfusion-related acute lung injury, which can be fatal; transfusion-associated circulatory overload; and allergic reactions. Rare complications include catching an infectious disease: blood products carry a greater than 1 in a million chance of HIV transmission. That’s too risky for JR, who tells me that before every treatment he takes a dose of the HIV prophylactic PrEP.

and

There could be risks of developing autoimmune disorders. And some fear that pumping stimulating proteins into people for years could lead to cancer. “If you keep infusing blood, the risk of reactions goes up,” says Dobri Kiprov, an immunologist at California Pacific Medical Center in San Francisco. “Many of these people are just eager to get younger – they don’t have a particular disease, so it’s not justified.”

It sounds dangerous and unproven, but multiple high-profile startups are researching this road:

At Google’s life-extension biotech arm Calico, among others, one researcher developed an experiment in which a pump ferried half the blood from one individual into another.

and

anti-ageing start-up Unity Biotechnology, which is backed by Amazon founder Jeff Bezos’s investment company. They are developing a blood-exchange device, a kind of dialysis machine for old age, which cycles your blood through a filter that washes a laundry list of harmful compounds out of the plasma before returning it to you. This would carry no immune effects or disease risks, because it’s your own blood. No regulatory approval is needed, because dialysis filters that remove proteins from plasma are already in use, for example to remove cholesterol in people with certain hereditary diseases.

They are also developing sensors to notify you when levels of bad biomarkers are getting too high – a decrepitude meter to tell you when it’s time for a decrepitude wash.

You may want to watch Tony Wyss-Coray’s TED Talk, too: How young blood might help reverse aging. Yes, really

The US Air Force has a division dedicated to human performance enhancement

From 711th Human Performance Wing

The 711th Human Performance Wing (711 HPW), headquartered at Wright-Patterson Air Force Base in Ohio, is the first human-centric warfare wing to consolidate human performance research, education and consultation under a single organization. Established under the Air Force Research Laboratory, the 711 HPW is comprised of the Airman Systems Directorate (RH), the United States Air Force School of Aerospace Medicine (USAFSAM) and the Human Systems Integration Directorate (HP). The Wing delivers unparalleled capability to the Air Force through a combination of world class infrastructure and expertise of its diverse workforce of military, civilian and contractor personnel encompassing 75 occupational specialties including science and engineering, occupational health and safety, medical professions, technicians, educators, and business operations and support.

VISION
To be a world leader for human performance.

MISSION
To advance human performance in air, space, and cyberspace through research, education, and consultation. The Wing supports the most critical Air Force resource – the Airman of our operational military forces. The Wing’s primary focus areas are aerospace medicine, Human Effectiveness Science and Technology, and Human Systems Integration. In conjunction with the Naval Medical Research Unit – Dayton and surrounding universities and medical institutions, the 711 HPW functions as a Joint Department of Defense Center of Excellence for human performance sustainment, optimization, and readiness.

Notice the inclusion of “cyberspace” among the environments where they try to advance human performance.

Smart diapers for the elderly – when smart monitoring is too much monitoring?

From Pixie Scientific announces availability for purchase in the UK of Pixie Pads, the first adult

Pixie Pads will help incontinent adults, including Alzheimer’s and other dementia sufferers, for whom behavioral symptoms of UTI are often confused with progression of dementia. Patients suffering the effects of stroke, spinal cord injury, or developmental disabilities, and men recovering from radical prostatectomy will also benefit from continuous monitoring enabled by Pixie Pads.

and

Disposable Pixie Pads contain an indicator panel that is scanned by a caregiver using the mobile Pixie App at changing time. The app stores urinalysis data in a secure online service for review and long-term monitoring. It issues an alert to a professional caregiver if there are signs of an infection that require further attention.

This was happening in mid-2016. One year later, Pixie Scientific got FDA approval to sell in the US as well and started shipping the pads.

Notice that the company initially targeted a completely different market, newborn children, but I guess that product wasn’t received too well. While monitoring the body can help diagnose and cure illnesses early on, it’s a big cultural shift from the state of “blindness” we are used to. Too much monitoring can create a state of anxiety and hyper-reaction to any deviation from the baseline, not just to legitimate symptoms.

CRISPR might be employed to destroy entire species

From A Crack in Creation:

Ironically, CRISPR might also enable the opposite: forcible extinction of unwanted animals or pathogens. Yes, someday soon, CRISPR might be employed to destroy entire species—an application I never could have imagined when my lab first entered the fledgling field of bacterial adaptive immune systems just ten years ago. Some of the efforts in these and other areas of the natural world have tremendous potential for improving human health and well-being. Others are frivolous, whimsical, or even downright dangerous. And I have become increasingly aware of the need to understand the risks of gene editing, especially in light of its accelerating use. CRISPR gives us the power to radically and irreversibly alter the biosphere that we inhabit by providing a way to rewrite the very molecules of life any way we wish. At the moment, I don’t think there is nearly enough discussion of the possibilities it presents—for good, but also for ill.

We have a responsibility to consider the ramifications in advance and to engage in a global, public, and inclusive conversation about how to best harness gene editing in the natural world, before it’s too late.

and

If the first of these gene drives (for pigmentation) seems benign and the second (for malaria resistance) seems beneficial, consider a third example. Working independently of the California scientists, a British team of researchers—among them Austin Burt, the biologist who pioneered the gene drive concept—created highly transmissive CRISPR gene drives that spread genes for female sterility. Since the sterility trait was recessive, the genes would rapidly spread through the population, increasing in frequency until enough females acquired two copies, at which point the population would suddenly crash. Instead of eradicating malaria by genetically altering mosquitoes to prevent them from carrying the disease, this strategy presented a blunter instrument—one that would cull entire populations by hindering reproduction. If sustained in wild-mosquito populations, it could eventually lead to outright extermination of an entire mosquito species.

and

It’s been estimated that, had a fruit fly escaped the San Diego lab during the first gene drive experiments, it would have spread genes encoding CRISPR, along with the yellow-body trait, to between 20 and 50 percent of all fruit flies worldwide.
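A back-of-the-envelope model shows why a recessive female-sterility drive is so potent. The sketch below is my own illustrative model, not the published study, and the 95% transmission rate is an assumption: heterozygotes pass on the drive allele far more often than the Mendelian one-half, so the allele spreads rapidly, and once most females carry two copies, reproduction collapses.

```python
import numpy as np

c = 0.95  # assumed drive transmission rate from heterozygotes (not Mendelian 0.5)

# Genotype frequencies [WW, WD, DD]; start with a 1% release of drive homozygotes.
f = np.array([0.99, 0.0, 0.01])

history = []
for gen in range(15):
    fWW, fWD, fDD = f
    fertile_females = fWW + fWD          # DD females are sterile
    history.append((gen, fDD, fertile_females))
    if fertile_females < 1e-9:
        break
    # Probability a gamete carries D: mothers (fertile genotypes only) vs fathers.
    pm = fWD * c / fertile_females
    pf = fWD * c + fDD
    # Random mating: next generation's genotype frequencies.
    f = np.array([(1 - pm) * (1 - pf),
                  pm * (1 - pf) + pf * (1 - pm),
                  pm * pf])

for gen, dd, ff in history:
    print(f"gen {gen:2d}  DD freq {dd:.3f}  fertile females {ff:.3f}")
```

In this deterministic toy version, the drive stays rare for a few generations, then the sterile-homozygote frequency shoots up and the fraction of fertile females collapses to a few percent within roughly a dozen generations; in a real finite population, stochastic effects would then finish the crash.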

The author of this book, Jennifer Doudna, is one of the scientists who discovered the groundbreaking gene-editing technique CRISPR-Cas9. The book is a fascinating narration of how CRISPR came to be, and it’s listed in the Key Books section of H+.

The book was finished in September 2016 (and published in June 2017), so the warning is quite recent.

You may also want to watch Doudna’s TED Talk about the bioethics of CRISPR: How CRISPR lets us edit our DNA.

Using Artificial Intelligence to augment human intelligence

From Using Artificial Intelligence to Augment Human Intelligence

in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful were it not buried under a wall of complexity that puts it out of reach for many readers.

This could be a seminal paper.

UK company pioneers tissue engineering with 3D bioprinters 

From Applications | 3Dynamic Systems Ltd

3Dynamic Systems is currently developing a range of 3D bioprinted vascular scaffolds as part of its new product line. We have been developing 3D bioprinting as a research tool since 2012 and have now pushed forward with the commercialisation of the first 3D tissue structures. Called the vascular scaffold, it is the first commercial tissue product to be developed by us. 3DS research has accelerated recently and work is now focussing on the fabrication of heterogeneous tissues for use in surgery.

Currently we manufacture 20mm length sections of bioprinted vessels, which if successful will lead to larger and more complex vessels to be bioprinted in 3D. Our research concentrates on using the natural self-organising properties of cells in order to produce functional tissues.

At 3DS, we have a long-term goal that this technology will one day be suitable for surgical therapy and transplantation. Blood vessels are made up of different cell types and our new Omega allows for many types of cells to be deposited in 3D. Biopsied tissue material is gathered from a host, with stem cells isolated and multiplied. These cells are cultured and placed in a bioreactor, which provides oxygen and other nutrients to keep them alive. The millions of cells that are produced are then added to our bioink and bioprinted into the correct 3D geometry.

Over the next two years we will begin the long road towards the commercialisation of our 3D bioprinted vessels. Further development of this technology will harness tissues for operative repair and, in the short term, tissues for pharmaceutical trials. This next step in the development of the process could one day transform the field of reconstructive medicine and may lead to the direct bioengineering of replacement human tissues on demand for transplantation.

The next opportunity for our research is in developing organ on a chip technology to test drugs and treatments. So far we have initial data based on our vascular structures. In the future this method may be used to analyse any side-effects of new pharmaceutical products.

3Dynamic Systems builds 3D bioprinters that automatically produce 3D tissue structures. The company also builds perfusion bioreactors that test tissue structures over periods of months for the effects of stimulation and the influence of drugs on 3D cell behaviour.

Normally, I don’t quote the website of companies working in the field of research and commercial application covered by H+. But these guys followed @hplus on Twitter without asking for any coverage and have a crystal clear website. I wish more companies were like this.

Ford testing exoskeletons for workers in final assembly plant

From Are exoskeletons the future of physical labor? – The Verge

The vest that Paul Collins has been wearing at Ford is made by Ekso Bionics, a Richmond, California-based company. It’s an electronic-free contraption, and the soft part that hugs his chest looks like the front of a backpack. But the back of it has a metal rod for a spine, and a small, curved pillow rests behind his neck. Extending from the spine are spring-loaded arm mechanics, ones that help Collins lift his arms to install carbon cans on Ford C-Max cars, and rubber grommets on Ford Focuses — about 70 cars an hour.

and

since 2011, Ford has been working, in some capacity, on wearable robotics solutions. But rather than trying to develop something that would give workers superhuman strength, the idea is to prevent injury. “In 2016, our injury statistics were the lowest we’ve seen on record. We’ve had an 83 percent decrease in some of these metrics over the past five years, which is all great,” Smets said. “But if you look at the body parts that are still getting injured, it’s predominantly the shoulder. That’s our number one joint for injury. It’s also the longest to return to full functionality, and the most costly.”

The Ekso vest I tried costs around $6,500 and weighs nine pounds. Smets handed me a power tool, flipped a physical switch on the arm of the vest, and told me to raise my arms over my head as though I was on an assembly line. At some point during my movement, the exosuit kicked into action, its spring mechanism lifting my arms the rest of the way. I could leave my arms in place above my head, too, fully supported. My fingers started to tingle after a while in that position.

Watch the video.