Limb reanimation through neuroscience and machine learning

From First paralysed person to be ‘reanimated’ offers neuroscience insights : Nature

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.

Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.
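
Neither the article nor this summary spells out the decoder itself, but the pipeline it describes — neural activity in, classified movement intent out, stimulation pattern sent to the sleeve — can be sketched roughly as follows. This is a minimal sketch assuming a simple linear classifier; every name, shape, and number in it is an illustrative assumption, not the team's actual code.

```python
# Minimal sketch of the decode-and-stimulate loop described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 96   # hypothetical electrode count on the implanted chip
N_MOTIONS = 6     # the article mentions six wrist and hand motions

# Stand-in training data: neural features recorded while the user
# thinks about each motion, labelled with the intended motion.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, N_CHANNELS))
y_train = rng.integers(0, N_MOTIONS, size=600)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Hypothetical lookup from decoded motion to per-electrode intensities
# for the stimulation sleeve on the forearm.
STIM_PATTERNS = rng.uniform(0.0, 1.0, size=(N_MOTIONS, 128))

def decode_and_stimulate(features: np.ndarray) -> np.ndarray:
    """Turn one window of neural activity into a stimulation pattern."""
    motion = decoder.predict(features.reshape(1, -1))[0]
    return STIM_PATTERNS[motion]

pattern = decode_and_stimulate(rng.normal(size=N_CHANNELS))
```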

This story is a year and a half old, but I just found out about it and I think it’s a critical piece of the big picture that H+ is trying to narrate.

A growing number of artificial intelligence researchers focus on algorithmic bias

Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Prof. Crawford is not the only one looking into algorithmic bias. As she shows in her presentation, a growing number of research papers focus on it, and even government agencies have started questioning how AI decisions are made.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias would manipulate people’s worldview in ways that mass media and politics can’t even dream about.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

A task force to review New York City agencies’ use of algorithms and their bias

From New York City Takes on Algorithmic Discrimination | American Civil Liberties Union

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

Timely, as more and more AI researchers look into algorithmic bias.

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

DARPA has become the world’s largest funder of “gene drive” research

From US military agency invests $100m in genetic extinction technologies | Science | The Guardian

A US military agency is investing $100m in genetic extinction technologies that could wipe out malarial mosquitoes, invasive rodents or other species, emails released under freedom of information rules show.

The UN Convention on Biological Diversity (CBD) is debating whether to impose a moratorium on the gene research next year and several southern countries fear a possible military application.

and

Gene-drive research has been pioneered by an Imperial College London professor, Andrea Crisanti, who confirmed he has been hired by Darpa on a $2.5m contract to identify and disable such drives.

Human augmentation has, at least at the beginning, a very limited number of very specific use cases. The supersoldier is certainly the top one.

Defeating cancer costs $500,000 

From Genetic Programmers Are the Next Startup Millionaires – MIT Technology Review

Cell Design Labs, founded by University of California, San Francisco, synthetic biologist Wendell Lim, creates “programs” to install inside T cells, the killer cells of the immune system, giving them new abilities.

Known as “CAR-T,” the treatments are both revolutionary and hugely expensive. A single dose is priced at around $500,000 but often results in a cure. Gilead quickly paid $12 billion to acquire Kite Pharma, maker of one of those treatments.

The initial T cell treatments, however, work only with blood cancers.

From FDA Approves Groundbreaking Gene Therapy for Cancer – MIT Technology Review

The FDA calls the treatment, made by Novartis, the “first gene therapy” in the U.S. The therapy is designed to treat an often-lethal type of blood and bone marrow cancer that affects children and young adults. Known as a CAR-T therapy, the approach has shown remarkable results in patients. The one-time treatment will cost $475,000, but Novartis says there will be no charge if a patient doesn’t respond to the therapy within a month.

The therapy, which will be marketed as Kymriah, is a customized treatment that uses a patient’s own T cells, a type of immune cell. A patient’s T cells are extracted and cryogenically frozen so that they can be transported to Novartis’s manufacturing center in New Jersey. There, the cells are genetically altered to have a new gene that codes for a protein—called a chimeric antigen receptor, or CAR. This protein directs the T cells to target and kill leukemia cells with a specific antigen on their surface. The genetically modified cells are then infused back into the patient.

This is less than the $700,000 previously reported, but still a fortune.

In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation

From In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation: Cell

Current genome-editing systems generally rely on inducing DNA double-strand breaks (DSBs). This may limit their utility in clinical therapies, as unwanted mutations caused by DSBs can have deleterious effects. CRISPR/Cas9 system has recently been repurposed to enable target gene activation, allowing regulation of endogenous gene expression without creating DSBs. However, in vivo implementation of this gain-of-function system has proven difficult. Here, we report a robust system for in vivo activation of endogenous target genes through trans-epigenetic remodeling. The system relies on recruitment of Cas9 and transcriptional activation complexes to target loci by modified single guide RNAs. As proof-of-concept, we used this technology to treat mouse models of diabetes, muscular dystrophy, and acute kidney disease. Results demonstrate that CRISPR/Cas9-mediated target gene activation can be achieved in vivo, leading to measurable phenotypes and amelioration of disease symptoms. This establishes new avenues for developing targeted epigenetic therapies against human diseases.

CRISPR can be repurposed to enable target gene activation

From Adapted Crispr gene editing tool could treat incurable diseases, say scientists | The Guardian

The technique is an adapted version of the powerful gene editing tool called Crispr. While the original version of Crispr snips DNA in precise locations to delete faulty genes or over-write flaws in the genetic code, the modified form “turns up the volume” on selected genes.

and

In the new version a Crispr-style guide is still used, but instead of cutting the genome at the site of interest, the Cas9 enzyme latches onto it. The new package also includes a third element: a molecule that homes in on the Cas9 and switches on whatever gene it is attached to.

and

The team showed that mice with a version of muscular dystrophy, a fatal muscle-wasting disorder, recovered muscle growth and strength. The illness is caused by a mutation in the gene that produces dystrophin, a protein found in muscle fibres. However, rather than trying to replace this gene with a healthy version, the team boosted the activity of a second gene that produces a protein called utrophin that is very similar to dystrophin and can compensate for its absence.

Of course, once you can activate genes at will, you can also boost a perfectly healthy human in areas where he/she is weak or inept.

Genetic engineering for skill enablement, that is.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
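
The geometric idea in the abstract is simple enough to demonstrate. Below is a toy sketch of the paper's "neutralize" step on made-up four-dimensional vectors: derive a gender direction from a definitional pair, then remove that component from gender-neutral words. (The paper builds the direction from several pairs via PCA and adds an "equalize" step, both omitted here.)

```python
# Toy illustration of the "neutralize" step: remove the component of a
# word vector that lies along a learned gender direction.
import numpy as np

emb = {
    "he":         np.array([ 1.0, 0.2, 0.3, 0.1]),
    "she":        np.array([-1.0, 0.2, 0.3, 0.1]),
    "programmer": np.array([ 0.4, 0.8, 0.1, 0.5]),
    "homemaker":  np.array([-0.5, 0.7, 0.2, 0.4]),
}

# Gender direction from a definitional pair (the paper uses the
# principal component over several such pairs; one pair suffices here).
g = emb["he"] - emb["she"]
g /= np.linalg.norm(g)

def neutralize(v: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the projection of v onto the bias direction."""
    return v - (v @ direction) * direction

for word in ("programmer", "homemaker"):
    before = emb[word] @ g
    after = neutralize(emb[word], g) @ g
    print(f"{word}: bias component {before:+.2f} -> {after:+.2f}")
```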

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
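
Yitu's matcher is proprietary, but systems of this class generally work the same way: map each face photo to an embedding vector, then search the database for the nearest vector. Here's a minimal sketch of that search; the dimensions, threshold, and random "database" are stand-ins, not anything Yitu has disclosed.

```python
# Generic sketch of embedding-based face identification.
from typing import Optional
import numpy as np

rng = np.random.default_rng(1)
EMB_DIM = 128                       # typical face-embedding size

# A real 1.8-billion-face database would be sharded behind an
# approximate-nearest-neighbor index; 10,000 random vectors stand in.
database = rng.normal(size=(10_000, EMB_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def identify(probe: np.ndarray, threshold: float = 0.6) -> Optional[int]:
    """Return the index of the best cosine match, or None below threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = database @ probe               # cosine similarities
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# A noisy view of a stored face still matches its database entry.
print(identify(database[4321] + 0.05 * rng.normal(size=EMB_DIM)))  # 4321
```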

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

The percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

Google open source tool DeepVariant achieves unprecedented accuracy in human genome sequencing

From Google Is Giving Away AI That Can Build Your Genome Sequence | Wired:

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes.

And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

and

Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But the data produced by today’s machines still only produce incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics.

and

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set.

After the FDA contest they transitioned the model to TensorFlow, Google’s artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, Freebayes, and Samtools, sometimes reducing errors by as much as 10-fold.
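
The "channels" idea is the heart of DeepVariant: the read pileup around each candidate variant is rendered as a multi-channel image and handed to an image classifier. Below is a much-simplified sketch of that encoding — the real tool uses more channels (strand, read support, and so on) and different encodings.

```python
# Sketch of encoding a read pileup as a multi-channel "image" that a
# CNN could classify. Channel layout here is a simplified assumption.
import numpy as np

BASES = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}

def pileup_tensor(reads: list, quals: list) -> np.ndarray:
    """Stack aligned reads into a (rows, width, channels) array."""
    height, width = len(reads), len(reads[0])
    img = np.zeros((height, width, 2), dtype=np.float32)
    for r, (read, q) in enumerate(zip(reads, quals)):
        for c, base in enumerate(read):
            img[r, c, 0] = BASES[base]   # channel 0: base identity
            img[r, c, 1] = q[c] / 60.0   # channel 1: base quality
    return img

reads = ["ACGTA", "ACGGA", "ACGTA"]      # toy aligned reads
quals = [[50, 48, 52, 30, 50]] * 3
tensor = pileup_tensor(reads, quals)
print(tensor.shape)   # (3, 5, 2) -> fed to an image classifier
```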

DeepVariant is now open source and available here: https://github.com/google/deepvariant

Google competes with many other vendors on many fronts. But while its competitors are focused on battling for today’s market opportunities, Google is busy in a solitary race to control the battlefield of the future: the human body.

The human body is the ultimate data center.

I always wondered how it would be if a superior species landed on earth and showed us how they play chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?

We are waiting to develop a human-level artificial intelligence and see if it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.

A deeper look into Kernel’s plan to create a brain prosthetic

From Inside the Race to Build a Brain-Machine Interface—and Outpace Evolution | WIRED

The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.

Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.

That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.

Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to make a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct its first human test.

Long and detailed report on what Kernel is doing. Really worth your time.

Sangamo Therapeutics attempts to edit a gene inside the body for the first time

From AP Exclusive: US scientists try 1st gene editing in the body

Scientists for the first time have tried editing a gene inside the body in a bold attempt to permanently change a person’s DNA to cure a disease.

The experiment was done Monday in California on 44-year-old Brian Madeux. Through an IV, he received billions of copies of a corrective gene and a genetic tool to cut his DNA in a precise spot.

and

Weekly IV doses of the missing enzyme can ease some symptoms, but cost $100,000 to $400,000 a year and don’t prevent brain damage.

Gene editing won’t fix damage he’s already suffered, but he hopes it will stop the need for weekly enzyme treatments.

and

The therapy has three parts: The new gene and two zinc finger proteins. DNA instructions for each part are placed in a virus that’s been altered to not cause infection but to ferry them into cells. Billions of copies of these are given through a vein.

They travel to the liver, where cells use the instructions to make the zinc fingers and prepare the corrective gene. The fingers cut the DNA, allowing the new gene to slip in. The new gene then directs the cell to make the enzyme the patient lacked.

Only 1 percent of liver cells would have to be corrected to successfully treat the disease, said Madeux’s physician and study leader, Dr. Paul Harmatz at the Oakland hospital.

Zinc finger nucleases are a different gene-editing tool from CRISPR.

I originally wanted to wait the three months necessary to verify if this procedure worked, but this is history in the making, with enormous implications, and I want H+ to have it on the record.

I’ll update this article with the results of the therapy once they are disclosed.

It might be possible to treat diseases by giving aging tissues a signal to clean house

From Young Again: How One Cell Turns Back Time – The New York Times

None of us was made from scratch. Every human being develops from the fusion of two cells, an egg and a sperm, that are the descendants of other cells. The lineage of cells that joins one generation to the next — called the germline — is, in a sense, immortal.

Biologists have puzzled over the resilience of the germline for 130 years, but the phenomenon is still deeply mysterious.

Over time, a cell’s proteins become deformed and clump together. When cells divide, they pass that damage to their descendants. Over millions of years, the germline ought to become too devastated to produce healthy new life.

and

On Thursday in the journal Nature, Dr. Bohnert and Cynthia Kenyon, vice president for aging research at Calico, reported the discovery of one way in which the germline stays young.

Right before an egg is fertilized, it is swept clean of deformed proteins in a dramatic burst of housecleaning.

and

Combining these findings, the researchers worked out the chain of events by which the eggs rejuvenate themselves.

It begins with a chemical signal released by the sperm, which triggers drastic changes in the egg. The protein clumps within the egg “start to dance around,” said Dr. Bohnert.

The clumps come into contact with little bubbles called lysosomes, which extend fingerlike projections that pull the clumps inside. The sperm signal causes the lysosomes to become acidic. That change switches on the enzymes inside the lysosomes, allowing them to swiftly shred the clumps.

We are entering a cycle where humans and algorithms are adapting to each other

From Exploring Cooperation with Social Machines:

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

You should also read Implementing Algorithms In The Form Of Scripts Has Been An Early Step In Training Humans To Be More Like Machines

Implementing algorithms in the form of scripts has been an early step in training humans to be more like machines

From Cooperating with Algorithms in the Workplace:

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and most particularly, cost savings. As a result phone calls to services such as hotels, shops and restaurants, may now have a script read out loud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely greatly restrict human agency by failing to define clear boundaries for the domain of the algorithm and recognizing the need for adaptation outside these boundaries.

Thus, often if a worker is asked a specialized or specific query, they lack the ability to respond to it and will either turn away the customer, or accelerate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Not only is the paper mighty interesting, but the whole body of research it belongs to is worth serious investigation.

Nutritional Ketosis Alters Fuel Preference and Thereby Endurance Performance in Athletes

From http://www.cell.com/cell-metabolism/pdfExtended/S1550-4131(16)30355-2:

Ketosis, the metabolic response to energy crisis, is a mechanism to sustain life by altering oxidative fuel selection. Often overlooked for its metabolic potential, ketosis is poorly understood outside of starvation or diabetic crisis. Thus, we studied the biochemical advantages of ketosis in humans using a ketone ester-based form of nutrition without the unwanted milieu of endogenous ketone body production by caloric or carbohydrate restriction.

In five separate studies of 39 high-performance athletes, we show how this unique metabolic state improves physical endurance by altering fuel competition for oxidative respiration. Ketosis decreased muscle glycolysis and plasma lactate concentrations, while providing an alternative substrate for oxidative phosphorylation. Ketosis increased intramuscular triacylglycerol oxidation during exercise, even in the presence of normal muscle glycogen, co-ingested carbohydrate and elevated insulin. These findings may hold clues to greater human potential and a better understanding of fuel metabolism in health and disease.

A drink made of pure Ketone could boost the body more than carbs, fat and protein

From Scientists think they’ve discovered a fourth type of fuel for humans — beyond carbs, fat, and protein | The Independent

To make the product, HVMN leveraged more than a decade and $60 million worth of scientific research through an exclusive partnership with Oxford University.

Most of the food we eat contains carbs. The carbs in fruit come from naturally occurring sugars; those in potatoes, veggies, and pasta come from starch. They’re all ultimately broken down into sugar, or glucose, for energy.

When robbed of carbs, the body turns to fat for fuel.

In the process of digging into its fat stores, the body releases molecules called ketones. A high-fat, low-carb diet (also known as a ketogenic diet) is a shortcut to the same goal.

Instead of going without food, someone on the keto diet tricks the body into believing it is starving by snatching away carbohydrates, its primary source of fuel.

This is why as long as you’re not eating carbs, you can ramp up your intake of fatty foods like butter, steak, and cheese and still lose weight. The body becomes a fat-melting machine, churning out ketones to keep running.

If you could ingest those ketones directly, rather than starving yourself or turning to a keto diet, you could essentially get a superpower.

That performance boost is “unlike anything we’ve ever seen before,” said Kieran Clarke, a professor of physiological biochemistry at Oxford who’s leading the charge to translate her work on ketones and human performance into HVMN’s Ketone.

Neurable has been working on developing brain-control systems for VR for over a year

From Brain-Controlled Typing May Be the Killer Advance That AR Needs – MIT Technology Review

The current speed record for typing via brain-computer interface is eight words per minute, but that uses an invasive implant to read signals from a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” explains Alcaide. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”

He says that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type.
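
Neurable hasn't published how its prediction works, but the speed-up Alcaide describes is plausibly a language-model layer on top of the letter decoder: once a few letters are in, the system can offer whole words as selection targets. A toy sketch of that idea, with a tiny frequency list standing in for a real language model:

```python
# Hypothetical word completion for a brain-controlled speller: the user
# confirms a predicted word instead of spelling every letter.
from collections import Counter

# A real system would use a proper language model; a frequency list
# over a toy vocabulary stands in here.
VOCAB = Counter({"hello": 120, "help": 90, "hands": 40, "brain": 75})

def complete(prefix: str, k: int = 3) -> list:
    """Return the k most frequent vocabulary words matching the prefix."""
    matches = [w for w in VOCAB if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -VOCAB[w])[:k]

print(complete("he"))   # ['hello', 'help'] -> offered as one-click targets
```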

This would have applications well beyond VR.

We don’t yet understand the brain’s coding for force

From For Brain-Computer Interfaces to Be Useful, They’ll Need to Be Wireless – MIT Technology Review

Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.

The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.

In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person’s head.

The above quote is interesting, especially because the research is ready to be tested but there’s no funding. However, the real value is in the video embedded in the page, where Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh, explains the research frontier for neural interfaces.

AR glasses competition starts to get real

From Daqri ships augmented reality smart glasses for professionals | VentureBeat

At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.

and

The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.

Olympus just presented a competing product for $1,500.

Olympus EyeTrek is a $1,500 open-source, enterprise-focused smart glasses product

From Olympus made $1,500 open-source smart glasses – The Verge

The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.

It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to adjust to multiple frames. It weighs 66g.

The manual mentions multiple built-in apps: image and video players, a camera (1280x720px), a video recorder (20fps, up to 30min recording), and the QR scanner. It connects to other things via Bluetooth or wireless network.

You can download the Software Development Kit here.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.

What is consciousness, and could machines have it?

From What is consciousness, and could machines have it? | Science

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

Robotic exoskeletons may revolutionize heavy industry

From Using this robot gives you monstrously powerful mech arms – The Verge

The Guardian GT looks immense, but its real selling point is its dexterity. Two sensitive controllers are used to guide the huge robot arms, which follow the operators’ motions precisely. To get a closer look at the action, video feed from a camera mounted on top of the Guardian GT is sent to a headset worn by the operator. And the controllers also include force feedback, so the operator gets an idea of how much weight the robot is moving. Each arm can pick up 500 lbs independently.

and

The Guardian GT’s control system allows it to take on delicate tasks, like pushing buttons and flipping switches. The video feed also means it can be used remotely. Combined, these attributes make the robot perfectly suited for dangerous jobs like cleaning out nuclear power plants. An onboard power source also means it can be operated without a tether, roaming independently for hours at a time.

Sarcos is building a truly impressive series of robotic exoskeleton suits, not just the GT. You should also look at the Guardian XO on their website where there are better videos of all products than the one embedded in the above article.

Sarcos says that their technology is the future of heavy industry in a wide range of scenarios:

  • nuclear reactor inspection and maintenance
  • petroleum
  • construction
  • heavy equipment manufacturing
  • palletizing and de-palletizing
  • loading and unloading supplies
  • shipboard and in-field logistics
  • erecting temporary shelters
  • equipment repairs
  • medical evacuation
  • moving rocks and debris in humanitarian missions

but I think this is just the beginning. Thanks to technological progress, their exoskeletons could become thinner and thinner, lighter and lighter, and be used in other fields too (including warfare).

They are even attempting to establish a robot-as-a-service model.

An implant to control the movements of a bionic hand and to communicate with his wife

From When man meets metal: rise of the transhumans | Technology | The Guardian

One of the inspirations for Vintiner’s journey into this culture was Professor Kevin Warwick, deputy vice-chancellor at Coventry University, who back in 1998 was the first person to put a silicon chip transponder under his skin (that enabled him to open doors and switch on lights automatically as he moved about his department) and to declare himself “cyborg”. Four years later Warwick pioneered a “Braingate” implant, which involved hundreds of electrodes tapping into his nervous system and transferring signals across the internet, first to control the movements of a bionic hand, and then to connect directly and “communicate” with his wife, who had a Braingate of her own.

In some ways Warwick’s work seemed to set the parameters of the bodyhacking experience: full of ambition, somewhat risky, mostly outlawed. The Braingate system is now being explored in America to help some patients suffering paralysis, but Warwick’s DIY work has not been widely taken up by either mainstream medicine, academia or commercial tech companies. He and his wife remain the only couple to have communicated “nervous system to nervous system” through pulses that it took six weeks for their brains to “hear”.

While this segment is the most interesting, the whole article is a long and fascinating journey into the biohacking counter-culture.

Gene therapy is “underrated” as a way to conquer old age

From One Man’s Quest to Hack His Own Genes – MIT Technology Review

The gene Hanley added to his muscle cells would make his body produce more of a potent hormone—potentially increasing his strength, stamina, and life span.

and

Hanley opted instead for a simpler method called electroporation. In this procedure, circular rings of DNA, called plasmids, are passed into cells using an electrical current. Once inside, they don’t become a permanent part of a person’s chromosomes. Instead, they float inside the nucleus. And if a gene is coded into the plasmid, it will start to manufacture proteins. The effect of plasmids is temporary, lasting weeks to a few months.

and

Hanley says he designed a plasmid containing the human GHRH [growth-hormone-releasing hormone] gene on his computer, with the idea of developing it as a treatment for AIDS patients. But no investors wanted to back the plan. He concluded that the way forward was to nominate himself as lab rat. Soon he located a scientific supply company that manufactured the DNA rings for him at a cost of about $10,000. He showed me two vials of the stuff he’d brought along in a thermos, each containing a few drops of water thickened by a half-milligram of DNA.

and

Hanley skipped some steps that most companies developing a drug would consider essential. In addition to proceeding without FDA approval, he never tested his plasmid in any animals. He did win clearance for the study from the Institute of Regenerative and Cellular Medicine in Santa Monica, California, a private “institutional review board,” or IRB, that furnishes ethics oversight of human experiments.

and

Hanley had opted to take six milligrams of the tranquilizer Xanax and got local anesthetic in his thighs. The doctor can be seen placing a plexiglass jig built by Hanley onto the biologist’s thigh. The doctor leans in with a hypodermic needle to inject the sticky solution of GHRH plasmids into the designated spot. He also uses the jig to guide the two electrodes, stiff sharp needles the size of fork tines, into the flesh. The electrodes—one positive, one negative—create a circuit, a little like jump-starting your car.

Highly controversial, and borderline legal, as you’d expect in any hacking activity, especially hacking the human body.

Hanley published his version of the story on the Institute for Ethics and Emerging Technologies, calling the above article a “gross misrepresentation”.

The first detailed neuroimaging investigation in patients with bionic limbs

From Observing How the Brain Learns to Control a Bionic Limb | Technology Networks

Targeted motor and sensory reinnervation (TMSR) is a surgical procedure on patients with amputations that reroutes residual limb nerves towards intact muscles and skin in order to fit them with a limb prosthesis allowing unprecedented control. By its nature, TMSR changes the way the brain processes motor control and somatosensory input; however the detailed brain mechanisms have never been investigated before and the success of TMSR prostheses will depend on our ability to understand the ways the brain re-maps these pathways.

and

a patient fitted with a TMSR prosthetic “sends” motor commands to the re-innervated muscles, where his or her movement intentions are decoded and sent to the prosthetic limb. On the other hand, direct stimulation of the skin over the re-innervated muscles is sent back to the brain, inducing touch perception on the missing limb.

Upper limb cortical maps in amputees with targeted muscle and sensory reinnervation

From Upper limb cortical maps in amputees with targeted muscle and sensory reinnervation | Brain

Neuroprosthetics research in amputee patients aims at developing new prostheses that move and feel like real limbs. Targeted muscle and sensory reinnervation (TMSR) is such an approach and consists of rerouting motor and sensory nerves from the residual limb towards intact muscles and skin regions. Movement of the myoelectric prosthesis is enabled via decoded electromyography activity from reinnervated muscles and touch sensation on the missing limb is enabled by stimulation of the reinnervated skin areas. Here we ask whether and how motor control and redirected somatosensory stimulation provided via TMSR affected the maps of the upper limb in primary motor (M1) and primary somatosensory (S1) cortex, as well as their functional connections.

Functional connectivity in TMSR patients between upper limb maps in M1 and S1 was comparable with healthy controls, while being reduced in non-TMSR patients. However, connectivity was reduced between S1 and fronto-parietal regions, in both the TMSR and non-TMSR patients with respect to healthy controls. This was associated with the absence of a well-established multisensory effect (visual enhancement of touch) in TMSR patients. Collectively, these results show how M1 and S1 process signals related to movement and touch enabled by targeted muscle and sensory reinnervation. Moreover, they suggest that TMSR may counteract maladaptive cortical plasticity typically found after limb loss, in M1, partially in S1, and in their mutual connectivity. The lack of multisensory interaction in the present data suggests that further engineering advances are necessary (e.g. the integration of somatosensory feedback into current prostheses) to enable prostheses that move and feel as real limbs.

We no longer know if we’re seeing the same information or what anybody else is seeing

From Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED.com

As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this.

and

What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

and

Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.

Longer than usual (23 min) TED talk, but worth it.

I, too, believe that there’s no malicious intent behind the increasingly capable AI we see these days. Quite the opposite: I believe that most people working at Google or Facebook are there to make a positive impact, to change the world for the better. The problem, on top of the business model, is that a lot of people, even the most brilliant ones, don’t take the time to ponder the long-term consequences of the things they are building in the way they are building them today.

There’s no scientific proof that today’s drugs can boost intelligence

From The Neuroscience of Intelligence:

The Internet has countless entries for IQ-boosting drugs, and there are many peer-reviewed studies of cognitive enhancing effects on learning, memory, and attention for drugs like nicotine (Heishman et al., 2010). Psychostimulant drugs used to treat attention deficit hyperactivity disorder (ADHD) and other clinical disorders of the brain are particularly favorite candidates for use by students in high school, college, and university and by adults without clinical conditions who desire cognitive enhancement for academic or vocational achievement. Many surveys show that drugs already are widely used to enhance aspects of cognition and a number of surrounding ethical issues have been discussed.

Overall, well-designed research studies do not strongly support such use (Bagot & Kaminer, 2014; Farah et al., 2014; Husain & Mehta, 2011; Ilieva & Farah, 2013; Smith & Farah, 2011). Even fewer studies are designed specifically to investigate drug effects directly on intelligence test scores in samples of people who do not have clinical problems. I could find no relevant meta-analysis that might support such use. In short, there is no compelling scientific evidence yet for an IQ pill.

As we learn more about brain mechanisms and intelligence, however, there is every reason to believe that it will be possible to enhance the relevant brain mechanisms with drugs, perhaps existing ones or new ones. Research on treating Alzheimer’s disease, for example, may reveal specific brain mechanisms related to learning and memory that can be enhanced with new drugs significantly better than existing drugs. This prospect fuels intense research at many multinational pharmaceutical companies. If such drugs become available to enhance learning and memory in patients with Alzheimer’s disease, surely the effect of those drugs will be studied in non-patients to boost cognition.

Biohacking is a broad term. Among other things, it can refer to technologies and methods to boost intelligence.

Haier is one of the most prominent scientists studying intelligence, and his book is a phenomenal history lesson on what has been researched in the last 40 years. There are innovative techniques being tried these days, including magnetic fields, electric shocks, and cold lasers to influence cognitive processes. Some of them may work. Today’s drugs to boost intelligence don’t: there’s no scientific evidence of it.

The fashion industry will have to embrace smart clothing end to end

As I observe the emergence of smart clothing in multiple categories (from smart socks to smart jackets), I am trying to imagine the implications for the buyer as more and more pieces of his/her wardrobe blend with technology.

Today smart clothing is mainly perceived as a nice-to-have by tech enthusiasts (both men and women), and as a gimmick by the larger mainstream audience. In the future, as the technology matures and starts providing significant benefits, smart clothing might become preferred rather than optional. What happens at that point?

Will the buyer continue to mix and match smart clothing pieces from different fashion brands as he/she does today with traditional clothing? Will he/she accept dealing with each app that comes with each garment? Socks, jackets, bras, gloves, pants, etc. Or will there be a company that centralizes the ecosystem around its technology hub, in the same way Apple is centralizing the smart home ecosystem around its HomeKit? Just one app to monitor all garments and understand our health status, mood, performance.

Apple’s Angela Ahrendts comes from Burberry. At the time, the consensus was that she was hired to drive the sales of upscale products like the premium Apple Watch Edition. Maybe there’s a longer-term reason?

What if technology becomes a primary driver for fashion purchasing decisions and such a centralizing company doesn’t emerge to save customers?
What if the buyer really cares about the technology benefits of smart clothing but doesn’t like the style or the colour of the few brands that offer the specific garment he/she wants?

I think that eventually some fashion brands will have to embrace smart clothing end to end, offering an entire collection of smart clothes. Not just to differentiate, but to retain customer loyalty, in the same way most collections today include all the trendiest pieces. And at that point, controlling a whole collection of smart clothes will be an opportunity to innovate, to make customers feel better about their inner selves, not just their external appearance.

In the IT industry, today we say that every company is becoming a tech company. Tomorrow it might well be that every fashion brand becomes a tech brand.

Sensoria smart socks, augmented athletes, and the future of sport

From Smart sports apparel startup Sensoria gets into healthcare, forms new company with top provider – GeekWire

Founded in 2011 by Vigano and his former Microsoft colleagues, Sensoria has developed an array of “smart” garments that can track your movements and measure how well you’re walking or running. The company offers an artificial intelligence-powered real-time personal trainer; it partnered with Microsoft last year to develop “smart soccer boots”; and it also partnered with Renault last year to build a smart racing suit for professional racecar drivers.

At the Microsoft Ignite conference in Orlando, I recently met an old friend of mine who works at this company. He showed me the smart sock. Here’s how it works:

      1. Each smart sock is infused with three proprietary textile sensors under the plantar area (bottom of the foot) to detect foot pressure.
      2. The conductive fibers relay data collected by the sensors to the anklet. The sock has been designed to function as a textile circuit board.
      3. Each sock features magnetic contact points below the cuff so you can easily connect your anklet to activate the textile sensors.
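
Concretely, each data point the anklet relays is tiny: a timestamp plus a handful of pressure readings. Here is a minimal Python sketch of what a sample and a crude gait classification might look like; all names, fields, and thresholds are hypothetical illustrations, not Sensoria's actual SDK.

```python
# A minimal sketch of one pressure sample from a smart sock.
# Field names and thresholds are hypothetical, not Sensoria's SDK.
from dataclasses import dataclass

@dataclass
class SockSample:
    timestamp_ms: int
    heel: float        # pressure under the heel, arbitrary units
    ball_inner: float  # under the ball of the foot, inner side
    ball_outer: float  # under the ball of the foot, outer side

    def strike_type(self) -> str:
        """Crude heel-strike vs forefoot-strike classification."""
        return "heel" if self.heel > self.ball_inner + self.ball_outer else "forefoot"

sample = SockSample(timestamp_ms=120, heel=0.9, ball_inner=0.2, ball_outer=0.3)
print(sample.strike_type())  # -> "heel"
```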

When I saw the product in person, I selfishly suggested a smart elbow brace for tennis (and squash) players, as I play squash.

There are a lot of applications for smart textiles beyond socks for sport, and in fact the company is entering the healthcare market too, but ever since meeting my friend I have been wondering about the future of sports.

Today athletes are forbidden from augmenting their bodies through chemicals. But what if tomorrow the appeal of sport becomes how much technology can push a human body?

A new genetic engineering method based on CRISPR: Base Editing 

From CRISPR 2.0 Is Here, and It’s Way More Precise – MIT Technology Review

The human genome contains six billion DNA letters, or chemical bases known as A, C, G and T. These letters pair off—A with T and C with G—to form DNA’s double helix. Base editing, which uses a modified version of CRISPR, is able to change a single one of these letters at a time without making breaks to DNA’s structure.

That’s useful because sometimes just one base pair in a long strand of DNA gets swapped, deleted, or inserted—a phenomenon called a point mutation. Point mutations make up 32,000 of the 50,000 changes in the human genome known to be associated with diseases.

In the Nature study, researchers led by David Liu, a Harvard chemistry professor and member of the Broad Institute, were able to change an A into a G. Such a change would address about half the 32,000 known point mutations that cause disease.

To do it, they modified CRISPR so that it would target just a single base. The editing tool was able to rearrange the atoms in an A so that it instead resembled a G, tricking cells into fixing the other DNA strand to complete the switch. As a result, an A-T base pair became a G-C one. The technique essentially rewrites errors in the genetic code instead of cutting and replacing whole chunks of DNA.

The new method is also called ABE (adenine base editor).
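
To make the letter-swapping concrete, here is a toy Python sketch of what an A→G edit does to a DNA duplex. It captures only the base-pairing bookkeeping described in the excerpt above, not the actual ABE chemistry.

```python
# Toy illustration of an A->G base edit on a DNA duplex.
# Conceptual sketch only; nothing here models the real enzyme.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand: str) -> str:
    """Return the base-paired strand (A<->T, C<->G)."""
    return "".join(COMPLEMENT[base] for base in strand)

def base_edit(strand: str, position: int, new_base: str = "G") -> tuple[str, str]:
    """Swap a single letter (fixing a point mutation) and re-pair the duplex."""
    edited = strand[:position] + new_base + strand[position + 1:]
    return edited, complement_strand(edited)

# A made-up disease-causing point mutation: the 4th letter should be G, not A.
mutant = "CCTAGGT"
edited, paired = base_edit(mutant, position=3, new_base="G")
print(edited, paired)  # CCTGGGT GGACCCA: the A-T pair became a G-C pair
```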

From New Gene-Editing “Pencil” Erases Disease-Causing Errors – Scientific American

before ABE can be tried in human patients, Liu says, doctors would need to determine when to intervene in the course of a genetic disease. They would also have to figure out how to best deliver the gene editor to the relevant cells—and to prove the approach is safe and effective enough to make a difference for the patient.

and

The ABE gene-editing process is efficient, effectively editing the relevant spot in the genome an average of 53 percent of the time across 17 tested sites, Liu said. It caused undesired effects less than 0.1 percent of the time, he added. That success rate is comparable with what CRISPR can do when it is cutting genes.

It’s such an incredible moment to work (and invest) in life sciences.

The minimum dataset scale for deep learning

From Google Brain chief: Deep learning takes at least 100,000 examples | VentureBeat

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”
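
Dean's rule of thumb can be read as a simple triage function. The thresholds below are my paraphrase of the quote, nothing more; real decisions also depend on label quality, problem difficulty, and the model class.

```python
# A toy triage function encoding Jeff Dean's dataset-size heuristic.
# Thresholds are a rough reading of the quote, not a hard rule.
def suggested_approach(n_examples: int) -> str:
    if n_examples < 100:
        return "too few examples for deep learning; start with rules or simple baselines"
    if n_examples < 100_000:
        return "borderline; classical ML may outperform deep learning"
    return "enough scale to seriously consider deep learning"

print(suggested_approach(250_000))  # -> "enough scale to seriously consider deep learning"
```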

The dangerous rush to build AI expertise

From Lyft’s biggest AI challenge is getting engineers up to speed | VentureBeat

Machine learning and deep learning AI have gone from the niche realm of PhDs to tools that will be used throughout all types of companies. That equates to a big skills gap, says Gil Arditi, product lead for Lyft’s Machine Learning Platform.

and

Today, of course, any engineer with a modicum of experience can spin up databases on user-friendly cloud services. That’s the path that AI processes have to travel, he says. Luckily, machine learning is making AI more accessible to newbies without a PhD in statistics, mathematics, or computer science.

“Part of the promise of machine learning in general but deep learning in particular … is that there actually is not a lot of statistical modeling,” said Arditi. “Instead of giving to the machines exact formulas that will address the problem, you just give it the tools and treat it like a black box.”

From LinkedIn plans to teach all its engineers the basics of using AI | VentureBeat

The academy isn’t designed to give engineers an academic grounding in machine learning as a discipline. It’s designed instead to prepare them for using AI in much the same way that they’d use a system like QuickSort, an algorithm for sorting data that’s fed into it. Users don’t have to understand how the underlying system works, they just need to know the right way to implement it.

That’s the goal for LinkedIn, Agarwal said. Thus far, six engineers have made it through the AI academy and are deploying machine learning models in production as a result of what they learned. The educational program still has a ways to go (Agarwal said he’d grade it about a “C+” at the moment), but it has the potential to drastically affect LinkedIn’s business.
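
The QuickSort analogy translates directly into code. In the sketch below (scikit-learn, with a synthetic dataset), the engineer consumes the model exactly as they would a library sort routine: construct, fit, predict, without ever touching the internals.

```python
# Using a model the way you'd use a library sort routine: as a black box.
# Minimal sketch with scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier()    # internals are irrelevant to the caller,
model.fit(X_train, y_train)         # just like QuickSort's partition scheme
print(model.score(X_test, y_test))  # accuracy on held-out data
```

That convenience is real, but it is also why the bias concerns raised earlier are so pressing: black boxes get deployed by people who cannot inspect them.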

From Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent – The New York Times

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

and

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

Two thoughts:

  • This is unprecedented in the last two decades. Not even the rise of virtualization or cloud computing triggered such a massive call to action.
  • Do you really think that all these education programs and all these rushed experts will spend any significant time on the ethical aspects of AI and long-term implications of algorithmic bias?

NATO calls for a specialist meeting about artificial intelligence in mid-2018

From NATO urged to rapidly absorb AI into its command and control | Jane’s 360

NATO advisers and industry are urging the allies to rapidly absorb artificial intelligence software into their militaries’ observe, orient, decide, act (OODA) loop or risk seeing the latter collapse in the face of adversaries’ increasingly sophisticated artificial intelligence (AI)-enabled attacks.

The NATO Information Systems Technology (IST) Panel Office has already arranged a 150-person meeting in Bordeaux for the end of May 2018:

In order to avoid an abstract scientific discussion, the national STB representatives will engage operational experts to participate and work with the scientists towards a common road map for future research activities in NATO that meet operational needs.

Within the OODA loop the first step ‘Observe’ is about harvesting data. Intelligent integration of heterogeneous devices, architectures of acquisition systems and sensors, decentralized management of data, and autonomous collection platforms and sensors give a huge field for improvement with Natural Language Processing and Artificial Intelligence technologies for acquiring and processing Big Data. The next step ‘Orient’ is about reasoning. Analysis of social media, information fusion, anomaly detection, and behavior modeling are domains with huge potential for Machine Learning algorithms. The same is applicable for the ‘Decide’ step where predictive analytics, augmented and virtual reality and many more technologies support the operational decision-making process. A complex battlefield and high speed operations require independently acting devices to ‘Act’ with a certain degree of Autonomy. In all steps, the application of AI technologies for automated analysis, early warnings, guaranteeing trust in the Internet of Things (IoT), and distinguishing relevant from Fake Data is mandatory.

This is the escalation that Nick Bostrom first (in his book Superintelligence) and Elon Musk later were talking about.

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

From [1705.08421] AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips with actions localized in space and time, resulting in 210k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: the definition of atomic visual actions, which avoids collecting data for each and every complex action; precise spatio-temporal annotations with possibly multiple annotations for each human; the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court.
We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.
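
In practice, each AVA annotation boils down to a small record: which person, in which clip, at what time, doing what, inside which box. Below is a sketch of parsing one such row; the field layout follows the released CSV format as I understand it, and the values are made up.

```python
# Parsing one AVA-style annotation row: who is where, when, doing what.
# Field order reflects my reading of the released CSV; values are invented.
import csv, io

# video_id, timestamp (s), x1, y1, x2, y2 (normalized box corners), action_id, person_id
ROW = "tt0012345,0902,0.226,0.135,0.714,0.992,12,0"

video_id, t, x1, y1, x2, y2, action_id, person_id = next(csv.reader(io.StringIO(ROW)))
print(f"person {person_id} in {video_id} at t={t}s: "
      f"action #{action_id} inside box ({x1},{y1})-({x2},{y2})")
```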

Google confirms it’s using YouTube to teach AI about human actions

From Google built a dataset to teach its artificial intelligence how humans hug, cook, and fight — Quartz

Google, which owns YouTube, announced on Oct. 19 a new dataset of film clips, designed to teach machines how humans move in the world. Called AVA, or “atomic visual actions,” the videos aren’t anything special to human eyes—they’re three second clips of people drinking water and cooking curated from YouTube. But each clip is bundled with a file that outlines the person that a machine learning algorithm should watch, as well as a description of their pose, and whether they’re interacting with another human or object. It’s the digital version of pointing at a dog with a child and coaching them by saying, “dog.”

and

This technology could help Google to analyze the years of video it processes on YouTube every day. It could be applied to better target advertising based on whether you’re watching a video of people talk or fight, or in content moderation. The eventual goal is to teach computers social visual intelligence, the authors write in an accompanying research paper, which means “understanding what humans are doing, what might they do next, and what they are trying to achieve.”

Google’s video dataset is free.

In 2015, I speculated on Twitter:

I wonder if @google already has enough @youtube videos to create a video version of Wikipedia (and if they already are machine learning it)

Bionic Prosthetics 101

From Mind-controlled bionic limbs are shaping the future of prosthetics – The National Student

Using a “biological amplifier” the muscle signals were amplified thousandfold by shifting the major nerves that normally went down the arm and letting them grow into the chest instead. When you think of closing your hand, a chest section will contract and electrodes will pick up those signals to tell the prosthetic arm to move.

The brain exchanges information through neural circuits, which have receptors to sense a stimulus, report this back to the nervous system and produce an appropriate response via motor neurons which lead to movement.
A touch on the chest would actually lead to the sensation of a touch on the patient’s phantom arm, even his missing fingers. Senses of hot, cold, as well as sharpness and dullness were all felt and this provided a way to restore sensation using a prosthetic hand “that feels”.

A small microcomputer sits on the patient’s back connected to the prosthetic which is trained by the patient’s mind to move in specific directions and perform different tasks.
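
Strip away the surgery and what remains is a pattern-recognition pipeline: electrodes produce a feature vector, and a classifier calibrated on the patient's attempts maps it to a movement command. A hedged sketch follows, with made-up features and scikit-learn standing in for the embedded decoder.

```python
# Toy EMG decoder: feature vector in, movement command out.
# Features, labels, and the classifier choice are all illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Pretend calibration session: chest-EMG feature vectors recorded while
# the patient thinks "close hand", "open hand", or "rotate wrist".
X_cal = np.array([[0.9, 0.1], [0.8, 0.2],
                  [0.1, 0.9], [0.2, 0.8],
                  [0.5, 0.5], [0.6, 0.4]])
y_cal = ["close_hand", "close_hand",
         "open_hand", "open_hand",
         "rotate_wrist", "rotate_wrist"]

decoder = KNeighborsClassifier(n_neighbors=1).fit(X_cal, y_cal)
print(decoder.predict([[0.85, 0.15]]))  # -> ['close_hand'], sent to the arm
```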

If you are new to bionic prosthetic technologies, this is a great introductory article about all recent approaches.

We want the UAE to become the world’s most prepared country for artificial intelligence 

From Mohammad Bin Rashid reveals reshuffled UAE Cabinet | Gulfnews.com

The new government – the 13th in the UAE’s history – sees the appointment of Omar Bin Sultan Al Olama, 27, as the State Minister for Artificial Intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” Shaikh Mohammad said.

Shaikh Mohammad added the new phase focuses on “future skills, future sciences and future technology, as we prepare for the centenary to ensure a better future for our generations”.

After Russia and China, the United Arab Emirates wants to make clear, too, that AI is a strategic advantage and a top priority.

Mastering the game of Go without human knowledge

From Mastering the game of Go without human knowledge : Nature

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Accumulating thousands of years of human knowledge during a period of just a few days

From AlphaGo Zero: Learning from scratch | DeepMind

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

and

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.

The new AlphaGo Zero is impressive not just because it uses no human dataset to become the world leader at what it does. It’s impressive also because it achieves that goal at a pace no human will ever be able to match.
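
The self-play recipe itself is compact enough to demonstrate on a toy problem. The sketch below learns the game of Nim (take 1 or 2 stones; taking the last stone wins) purely from self-play: a value table stands in for the deep network and one-ply lookahead for MCTS, so this illustrates the loop, not the scale.

```python
# AlphaGo-Zero-style learning in miniature: no human data, a value
# function improved purely by self-play. Toy game: Nim with 9 stones.
import random
from collections import defaultdict

ALPHA, EPSILON, N_STONES = 0.3, 0.2, 9
V = defaultdict(float)  # V[s]: win-probability estimate for the player to move

def moves(s):
    return [m for m in (1, 2) if m <= s]

def greedy(s):
    return min(moves(s), key=lambda m: V[s - m])  # leave the opponent the worst state

for _ in range(5000):  # self-play games
    s = N_STONES
    while s > 0:
        m = random.choice(moves(s)) if random.random() < EPSILON else greedy(s)
        # TD update: my value is the complement of the opponent's value in the
        # state I leave them (V[0] = 0: no stones left means I just won).
        V[s] += ALPHA * ((1.0 - V[s - m]) - V[s])
        s -= m

# States that are multiples of 3 converge towards 0: losing for the mover.
print({s: round(V[s], 2) for s in range(1, N_STONES + 1)})
```

Replace the table with a deep residual network and the one-ply lookahead with Monte Carlo tree search, and you have the shape of the real system.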

OpenAI achieves Continuous Agent Adaptation via Meta-Learning

From Adaptation via Meta-Learning

We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.

Summary: After a few epochs of evolution, Spiders, being the weakest, disappeared, the subpopulation of Bugs more than doubled, the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies end up dominating the population.

OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least under certain conditions. The environment is dynamic for a number of reasons, including the fact that opponents are learning as well.

AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.

The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.

We know that the neocortex is a prediction machine and that human intelligence largely amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.
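
The population dynamics in the quote are easy to replicate in miniature. The sketch below keeps only the adaptation-strategy dimension and assigns each strategy an assumed win probability; in the real experiment those probabilities emerge from RoboSumo fights between adapting agents.

```python
# The tournament bookkeeping in miniature: random pairing, losers
# culled, winners cloned. Strength values per strategy are made up.
import random
from collections import Counter

STRENGTH = {"meta-updates": 0.6, "RL^2": 0.5, "PPO-tracking": 0.4}  # assumed
population = [s for s in STRENGTH for _ in range(350)]              # 1,050 agents

for epoch in range(10):
    random.shuffle(population)
    survivors = []
    for a, b in zip(population[::2], population[1::2]):
        p_a = STRENGTH[a] / (STRENGTH[a] + STRENGTH[b])  # chance that a beats b
        winner = a if random.random() < p_a else b
        survivors += [winner, winner]  # the winner replicates, the loser is culled
    population = survivors
    print(epoch, Counter(population))
```

Even with modest strength differences, replication plus culling lets the strongest strategy take over within a few epochs: the same replicator dynamics the researchers report.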

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

From [1710.03641] Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.

The Market for Bionic Prosthetics

From Medical Bionic Implants And Exoskeletons Market Projected CAGR of 7.5% During the period 2017-2027 – The Edition Truth

The global medical bionic implants and exoskeletons market stood at U$ 454.5 Mn in 2016. It is expected to expand at a CAGR of 7.5% during the period 2017-2027 to reach U$ 1,001.4 Mn. Factors such as rising amputation rates, diabetes, arthritis, trauma cases and expanding ageing demographics have led to a higher number of bionic implants and exoskeletons procedures. According to National Center for Health Statistics, 185,000 new amputations are consistently being performed in the U.S every year. Advancement in new robotics technology (mind-controlled bionic limbs & exoskeletons) coupled with 3D printing is also positively impacting the growth of the market.

This is just the market for addressing a disability or impairment (aka “fixing”). There will be a market for intentional augmentation (aka “improving”).

Biohacker Attempts Editing His DNA With CRISPR

From This Guy Says He’s The First Person To Attempt Editing His DNA With CRISPR

the biohacker claims he’s the first person trying to modify his own genome with the groundbreaking gene-editing technology known as CRISPR. And he’s providing the world with the means to do it, too, by posting a “DIY Human CRISPR Guide” online and selling $20 DNA that promotes muscle growth.

But editing your DNA isn’t as simple as following step-by-step advice. Scientists say that injecting yourself with a gene for muscle growth, as Zayner’s done, won’t in fact pump up your arms. Zayner himself admits that his experiments over the last year haven’t visibly changed his body. There are safety risks, too, experts say: People could infect themselves, or induce an inflammatory reaction.

But to Zayner, whether or not the experiment actually works is beside the point. What he’s trying to demonstrate, Zayner told BuzzFeed News, is that cutting-edge biology tools like CRISPR should be available to people to do as they wish, and not be controlled by academics and pharmaceutical companies.

Another biohacker, Brian Hanley, known for testing anti-aging gene therapy on himself, commented on Zayner’s kits in a post on the Institute for Ethics and Emerging Technologies:

Yes, there is a long history of scientists and physicians experimenting on themselves. 15 Nobel prizewinners did it. Hundreds of documented cases of prominent scientists doing it. I am sure there are thousands more such experiments by scientists that are not documented. There have been no documented deaths of scientists by self-experiment since 1928. But it is one thing for someone who really understands what they are doing to perform such experiments, or for qualified people to assist another qualified person. It is quite another thing for Joe programmer biohacker-hopeful to do that without really understanding it because some guy sold him a kit.

The point is not whether it’s legit, effective, or legal. The point is that there is a growing community of humans experimenting, tinkering, and taking risks with their bodies, trying to achieve things that the mainstream audience considers horrifying, impossible, or out of reach. This community doesn’t have much credibility today, just as IT security hackers didn’t have much credibility in the early days of the Internet. Today, hacking communities are recruiting pools for the world’s top military organizations, and hacking conferences are a prime stage for the biggest software and hardware vendors on the market.

Lost in a sea of pseudo-scientists, impostors, scammers, and amateur wannabes, there are a few serious, determined, fearless explorers of the human body. They won’t look credible until the day they are.

Cancer incidence increasing globally: The role of relaxed natural selection

From Cancer incidence increasing globally: The role of relaxed natural selection – You – 2017 – Evolutionary Applications

Cancer incidence increase has multiple aetiologies. Mutant alleles accumulation in populations may be one of them due to strong heritability of many cancers. The opportunity for the operation of natural selection has decreased in the past ~150 years because of reduction in mortality and fertility. Mutation-selection balance may have been disturbed in this process and genes providing background for some cancers may have been accumulating in human gene pools. Worldwide, based on the WHO statistics for 173 countries the index of the opportunity for selection is strongly inversely correlated with cancer incidence in people aged 0–49 years and in people of all ages. This relationship remains significant when gross domestic product per capita (GDP), life expectancy of older people (e50), obesity, physical inactivity, smoking and urbanization are kept statistically constant for fifteen (15) of twenty-seven (27) individual cancers incidence rates. Twelve (12) cancers which are not correlated with relaxed natural selection after considering the six potential confounders are largely attributable to external causes like viruses and toxins. Ratios of the average cancer incidence rates of the 10 countries with lowest opportunities for selection to the average cancer incidence rates of the 10 countries with highest opportunities for selection are 2.3 (all cancers at all ages), 2.4 (all cancers in 0–49 years age group), 5.7 (average ratios of strongly genetically based cancers) and 2.1 (average ratios of cancers with less genetic background).