Artificial Intelligence

Instantly correct robot mistakes with nothing more than brain signals and the flick of a finger

From How to control robots with brainwaves and hand gestures | MIT News

By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
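
The article stays at a high level, but the control loop it describes is easy to picture in code. Below is a minimal sketch of how such a hybrid EEG/EMG supervision loop might be wired together; the robot object and the two classifiers are hypothetical placeholders, not the MIT team’s software.

```python
# Hypothetical building blocks (not the MIT system):
#   errp_score(eeg_window) -> probability that the user just perceived a robot error
#   gesture(emg_window)    -> one of "scroll_left", "scroll_right", "confirm"

def supervise(robot, eeg_stream, emg_stream, errp_score, gesture, threshold=0.8):
    """Watch the EEG stream for error-related potentials; when the human notices
    a mistake, let them pick the correct target with EMG hand gestures."""
    for eeg_window in eeg_stream:
        if errp_score(eeg_window) < threshold:
            continue                          # no error perceived, let the robot carry on
        robot.pause()
        options = robot.candidate_targets()   # e.g. ["left target", "right target"]
        selected = 0
        for emg_window in emg_stream:         # scroll through options with gestures
            g = gesture(emg_window)
            if g == "scroll_left":
                selected = (selected - 1) % len(options)
            elif g == "scroll_right":
                selected = (selected + 1) % len(options)
            elif g == "confirm":
                robot.execute(options[selected])
                break
        robot.resume()
```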

It is the end of the poker face

From Poppy Crum: Technology that knows what you’re feeling | TED Talk

Your pupil doesn’t lie. Your eye gives away your poker face. When your brain’s having to work harder, your autonomic nervous system drives your pupil to dilate. When it’s not, it contracts. When I take away one of the voices, the cognitive effort to understand the talkers gets a lot easier. I could have put the two voices in different spatial locations, I could have made one louder. You would have seen the same thing. We might think we have more agency over the reveal of our internal state than that spider, but maybe we don’t.

Must-watch.

The moment a company brings to market a mainstream AR wearable (like smart contact lenses) that can act as an application platform (like iOS) and supports the installation of third-party applications through a marketplace (like the App Store), there will be a rush to develop AI apps that can read people’s behaviour in real time, in a way that most human brains cannot.

It doesn’t matter if the intentions are good. Such applications would expose vulnerabilities we are not prepared to defend against.

There’s nothing you can do with a chip in your brain that we can’t do better

From Testing the CTRL-Labs wristband that lets you control computers with your mind – The Verge

CTRL-Labs’ work is built on a technology known as differential electromyography, or EMG. The band’s inside is lined with electrodes, and while they’re touching my skin, they measure electrical pulses along the neurons in my arm. These superlong cells are transmitting orders from my brain to my muscles, so they’re signaling my intentions before I’ve moved or even when I don’t move at all.

EMG is widely used to measure muscle performance, and it’s a promising option for prosthetic limb control. CTRL-Labs isn’t the first company to imagine an EMG-based interface, either. Canadian startup Thalmic Labs sells an EMG gesture-reading armband called the Myo, which detects muscle movements and can handle anything from controlling a computer to translating sign language. (CTRL-Labs used Myo armbands in early prototyping, before designing its own hardware.)

and

One issue is interference from what Bouton refers to as motion artifacts. The bands have to process extraneous data from accidental hand movements, external vibrations, and the electrodes shifting around the skin. “All those things can cause extra signal you don’t want,” he says. An electrode headset, he notes, would face similar problems — but they’re serious issues for either system.

Reardon says CTRL-Labs’ band can pick out far more precise neural activity than the Myo, which Thalmic bills as a muscle-reading system rather than a brain-computer interface. And the band is supposed to work consistently anywhere on the wrist or lower arm, as long as it’s fitted snugly. (The prototype felt like wearing a thick, metallic elastic bracelet.) But Bouton, who uses EMG to find and activate muscles of people with paralysis, says users would get the best results from hitting exactly the same spot every time — which the average person might find difficult. “Even just moving a few millimeters can make a difference,” he says

Long, fascinating profile of CTRL-Labs. I saw them presenting in NYC at the O’Reilly AI Conference, where they announced the availability of their wristband within the year.

From Augmented Reality to Altered Reality

From Dehumanization of Warfare: Legal Implications of New Weapon Technologies:

However, where soldiers are equipped with cybernetic implants (brain-machine interfaces) which mediate between an information source and the brain, the right to “receive and impart information without interference from a public authority” gains a new dimension. There are many technologies which provide additional information to armed forces personnel, e.g., heads-up displays for fighter pilots and the Q-warrior augmented reality helmets from BAE Systems, which are unlikely to impact this right.

However, there are technologies in development which are intended to filter data in order to prevent information overload. This may be particularly relevant where the implant or prosthetic removes visual information from view, or is designed to provide targeting information to the soldier. According to reports, software has been devised in Germany which allows for the deletion of visual information by smart glass or contact lens.

As one futurist was quoted as saying “So if you decide you don’t like homeless people in your city, and you use this software and implant it in your contact lenses, then you won’t see them at all.”

An entire section of this book is dedicated to the legal and ethical implications of using supersoldiers, augmented by bionic prosthetics, augmented reality devices, and neural interfaces, in modern warfare. Highly recommended.

The book is now featured in the “Key Books” section of H+.

Our results are nearly indistinguishable from the real video

From Forget DeepFakes, Deep Video Portraits are way better (and worse) | TechCrunch

Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another.

and

There’s no way to make a person do something or make an expression that’s too far from what they do on camera, though. For instance, the system can’t synthesize a big grin if the person is looking sour the whole time (though it might try and fail hilariously). And naturally there are all kinds of little bugs and artifacts. So for now the hijinks are limited.

Astounding results. You must watch the video.

Now, what happens if this video editing happens in real time to alter the reality perceived through AR glasses? For example, the ones a soldier might use.

US Department of Defense has 592 projects powered by Artificial Intelligence

From Pentagon developing artificial intelligence center

Speaking at the House Armed Services Committee April 12, Mattis said “we’re looking at a joint office where we would concentrate all of DoD’s efforts, since we have a number of AI efforts underway right now. We’re looking at pulling them all together.”

He added that the department counts 592 projects as having some form of AI in them, but noted that not all of those make sense to tie into an AI center. And Griffin wants to make sure smaller projects that are close to completion get done and out into prototyping, rather than tied up in the broader AI project.

And then, of course, there are those AI projects so secret that they won’t even be listed among those 592. It would be interesting to see how many of these relate to the super-soldier use case.

Wearable device picks up neuromuscular signals saying words “in your head”

From Computer system transcribes words users “speak silently” | MIT News

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

and

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
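
None of the AlterEgo code is public, but the learning problem described above, mapping windows of neuromuscular signals to a small vocabulary, is a standard supervised-classification setup. A minimal scikit-learn sketch on purely synthetic data, just to make the pipeline concrete (the feature extraction and vocabulary here are made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 500 windows of 7-channel EMG, 100 samples each, labelled with
# one of a small arithmetic vocabulary (purely synthetic, for illustration).
VOCAB = ["zero", "one", "two", "plus", "minus", "equals"]
X_raw = rng.normal(size=(500, 7, 100))
y = rng.integers(0, len(VOCAB), size=500)

def features(window):
    # Simple per-channel summary statistics as features, a placeholder for
    # whatever signal processing the real system uses.
    return np.concatenate([window.mean(axis=1), window.std(axis=1),
                           np.abs(np.diff(window, axis=1)).mean(axis=1)])

X = np.array([features(w) for w in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
# On synthetic noise this hovers around chance; real EMG signals are what make it work.
print("held-out accuracy:", clf.score(X_test, y_test))
```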

Sci-fi movies have shaped the collective imagination of neural interfaces as some sort of hardware port or dongle sticking out of the neck, connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far in the future.

This approach is much more feasible. Imagine if this object, AlterEgo, became the main computer peripheral, replacing the keyboard and mouse.
The question is not just about accuracy, but also how its speed compares to existing input methods.

Watch the video.

MIT terminates collaboration with Nectome

From MIT severs ties to company promoting fatal brain uploading – MIT Technology Review

According to an April 2 statement, MIT will terminate Nectome’s research contract with Media Lab professor and neuroscientist Edward Boyden.

MIT’s connection to the company drew sharp criticism from some neuroscientists, who say brain uploading isn’t possible.

“Fundamentally, the company is based on a proposition that is just false. It is something that just can’t happen,” says Sten Linnarsson of the Karolinska Institute in Sweden.

He adds that by collaborating with Nectome, MIT had lent credibility to the startup and increased the chance that “some people actually kill themselves to donate their brains.”

It didn’t take long.

It’s hard enough for normal companies to stand the pressure of the press and public opinion. It must be impossibly hard when you try to commercialize an attempt to escape death.

Many of the companies that are covered here on H+ face the same challenge.

MIT Intelligence Quest Launch Event Videos

From MIT IQ Launch

On March 1, we convened at Kresge Auditorium on the MIT campus to set out on the MIT Intelligence Quest — an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society.

MIT faculty, alumni, students, and friends talked about their work across all aspects of this domain — from unpublished research, to existing commercial enterprises, to the social and ethical implications of AI.

Learn why and how MIT is primed to take the next breakthrough step in advancing the science and applications of intelligence by clicking on the available presentations below.

MIT announced the Intelligence Quest in February. This is the whole launch event: dozens of presentations were recorded and are now available online.

Must-watch.

Nectome will preserve your brain, but you have to be euthanized first

From A startup is pitching a mind-uploading service that is “100 percent fatal” – MIT Technology Review

Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal,”

and

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it.

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at the eye’s blood vessels with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.
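
Google’s model is described in the paper as a deep network trained end-to-end on retinal fundus photographs, predicting several risk factors at once. As a purely illustrative sketch (not Google’s architecture or training setup), a multi-task Keras model with one image encoder and separate heads for age, blood pressure, smoking status, and a cardiac-event label might look like this:

```python
import tensorflow as tf

def build_fundus_model(input_shape=(299, 299, 3)):
    """Toy multi-task model in the spirit of the paper: one image encoder,
    several prediction heads. Architecture and head names are illustrative."""
    inputs = tf.keras.Input(shape=input_shape)
    features = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, pooling="avg")(inputs)
    age = tf.keras.layers.Dense(1, name="age")(features)
    sbp = tf.keras.layers.Dense(1, name="systolic_bp")(features)
    smoker = tf.keras.layers.Dense(1, activation="sigmoid", name="smoker")(features)
    event = tf.keras.layers.Dense(1, activation="sigmoid", name="cardiac_event")(features)
    model = tf.keras.Model(inputs, [age, sbp, smoker, event])
    model.compile(optimizer="adam",
                  loss={"age": "mse", "systolic_bp": "mse",
                        "smoker": "binary_crossentropy",
                        "cardiac_event": "binary_crossentropy"})
    return model
```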

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
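
The “simulations” mentioned above amount to asking how much recognition accuracy survives when every weight is stored in an imperfect physical synapse. A rough, self-contained way to get a feel for that, using an ordinary software network and scikit-learn (an illustration of the idea, not a reproduction of the MIT simulation):

```python
import copy
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train an ordinary software network on handwritten digits, then mimic the
# hardware experiment by perturbing every weight ("synapse") with a given
# device-to-device variation and re-measuring accuracy.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("ideal weights:", net.score(X_test, y_test))

rng = np.random.default_rng(0)
for variation in (0.04, 0.25):       # 4% (as reported) vs. a much noisier device
    noisy = copy.deepcopy(net)
    noisy.coefs_ = [w * rng.normal(1.0, variation, size=w.shape) for w in net.coefs_]
    print(f"{variation:.0%} weight variation:", noisy.score(X_test, y_test))
```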

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI) and, before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
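
The article is about probabilistic extensions to TensorFlow and PyTorch (Edward and Pyro). A much simpler way to get a taste of the same idea is Monte Carlo dropout: keep dropout active at prediction time and look at the spread of repeated predictions. A minimal Keras sketch on toy data:

```python
import numpy as np
import tensorflow as tf

# A tiny regression network with dropout. At inference we keep dropout active
# (training=True) and run many stochastic forward passes: the spread of the
# predictions is a rough measure of the model's confidence.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Toy data: y = sin(x) with noise, observed only on part of the input range.
x = np.random.uniform(-3, 0, size=(1000, 1)).astype("float32")
y = np.sin(x) + 0.1 * np.random.randn(1000, 1).astype("float32")
model.fit(x, y, epochs=50, verbose=0)

x_query = np.array([[-1.5], [2.5]], dtype="float32")   # seen vs. unseen region
samples = np.stack([model(x_query, training=True).numpy() for _ in range(100)])
print("mean prediction:", samples.mean(axis=0).ravel())
print("uncertainty (std):", samples.std(axis=0).ravel())  # usually larger where data was absent
```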

Using Artificial Intelligence to augment human intelligence

From Using Artificial Intelligence to Augment Human Intelligence

in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful if it weren’t buried under a wall of complexity, putting it out of reach for many readers.

This could be a seminal paper.

Limb reanimation through neuroscience and machine learning

From First paralysed person to be ‘reanimated’ offers neuroscience insights : Nature

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.

Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.

This story is a year and a half old, but I just found out about it, and I think it’s a critical piece of the big picture that H+ is trying to narrate.

A growing number of artificial intelligence researchers focus on algorithmic bias

Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Prof. Crawford is not the only one looking into algorithmic bias. As she shows in her presentation, a growing number of research papers focus on it, and even government agencies have started questioning how AI decisions are made.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias would manipulate people’s worldview in ways that mass media and politics can’t even dream about.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

A task force to review New York City agencies’ use of algorithms and their bias

From New York City Takes on Algorithmic Discrimination | American Civil Liberties Union

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

Timely, as more and more AI researchers look into algorithmic bias.

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
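
The geometric idea at the heart of the paper, finding a gender direction and projecting it out of words that should be neutral, fits in a few lines. A toy sketch with random stand-in vectors (the real method uses PCA over several definitional pairs plus an equalization step):

```python
import numpy as np

# Toy sketch of the paper's "neutralize" step: find a gender direction from
# definitional pairs, then remove that component from words that should be
# gender-neutral. Vectors below are random stand-ins for real embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["he", "she", "man", "woman", "programmer", "receptionist", "queen"]}

def unit(v):
    return v / np.linalg.norm(v)

# Gender direction: average of differences over definitional pairs.
pairs = [("she", "he"), ("woman", "man")]
g = unit(np.mean([emb[a] - emb[b] for a, b in pairs], axis=0))

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

for w in ["programmer", "receptionist"]:   # occupation words become gender-neutral
    emb[w] = neutralize(emb[w], g)
    print(w, "projection on gender direction:", round(float(np.dot(emb[w], g)), 6))
# Words like "queen", which are definitionally gendered, are left untouched.
```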

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

Chinese researchers’ share of contributions to the top 100 AI journals/conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

I always wondered how it would be if a superior species landed on earth and showed us how they play chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP)

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?

We are waiting to develop a human-level artificial intelligence and see if it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.
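
As the excerpt notes, the resulting models were open-sourced, and they have since been packaged in mainstream frameworks. A minimal inference sketch, assuming the tf.keras.applications packaging and a local image file (both are assumptions, not part of Google’s original release):

```python
import numpy as np
import tensorflow as tf

# Load a NASNet image classifier with ImageNet weights and classify one image.
model = tf.keras.applications.NASNetMobile(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.nasnet.preprocess_input(x)

preds = model.predict(x)
for _, label, score in tf.keras.applications.nasnet.decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))
```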

We are entering a cycle where humans and algorithms are adapting to each other

From Exploring Cooperation with Social Machines:

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

You should also read Implementing Algorithms In The Form Of Scripts Has Been An Early Step In Training Humans To Be More Like Machines

Implementing algorithms in the form of scripts has been an early step in training humans to be more like machines

From Cooperating with Algorithms in the Workplace:

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and most particularly, cost savings. As a result phone calls to services such as hotels, shops and restaurants, may now have a script read out loud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely greatly restrict human agency by failing to define clear boundaries for the domain of the algorithm and recognizing the need for adaptation outside these boundaries.

Thus, often if a worker is asked a specialized or specific query, they lack the ability to respond to it and will either turn away the customer, or accelerate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Not only is the paper mighty interesting, but the whole body of research it belongs to is worth serious investigation.

Also, this TED Talk by David Lee touches on the topic in quite an interesting way: Why jobs of the future won’t feel like work

What is consciousness, and could machines have it?

From What is consciousness, and could machines have it? | Science

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

We no longer know if we’re seeing the same information or what anybody else is seeing

From Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED.com

As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this.

and

What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

and

Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.

Longer than usual (23 min) TED talk, but worth it.

I, too, believe that there’s no malicious intent behind the increasingly capable AI we see these days. Quite the opposite: I believe that most people working at Google or Facebook are there to make a positive impact, to change the world for the better. The problem, on top of the business model, is that a lot of people, even the most brilliant ones, don’t take the time to ponder the long-term consequences of the things they are building, and of the way they are building them today.

The minimum dataset scale for deep learning

From Google Brain chief: Deep learning takes at least 100,000 examples | VentureBeat

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

The dangerous rush to build AI expertise

From Lyft’s biggest AI challenge is getting engineers up to speed | VentureBeat

Machine learning and deep learning AI have gone from the niche realm of PhDs to tools that will be used throughout all types of companies. That equates to a big skills gap, says Gil Arditi, product lead for Lyft’s Machine Learning Platform.

and

Today, of course, any engineer with a modicum of experience can spin up databases on user-friendly cloud services. That’s the path that AI processes have to travel, he says. Luckily, machine learning is making AI more accessible to newbies without a PhD in statistics, mathematics, or computer science.

“Part of the promise of machine learning in general but deep learning in particular … is that there actually is not a lot of statistical modeling,” said Arditi. “Instead of giving to the machines exact formulas that will address the problem, you just give it the tools and treat it like a black box.”

From LinkedIn plans to teach all its engineers the basics of using AI | VentureBeat

The academy isn’t designed to give engineers an academic grounding in machine learning as a discipline. It’s designed instead to prepare them for using AI in much the same way that they’d use a system like QuickSort, an algorithm for sorting data that’s fed into it. Users don’t have to understand how the underlying system works, they just need to know the right way to implement it.

That’s the goal for LinkedIn, Agarwal said. Thus far, six engineers have made it through the AI academy and are deploying machine learning models in production as a result of what they learned. The educational program still has a ways to go (Agarwal said he’d grade it about a “C+” at the moment) but it has the potential to drastically affect LinkedIn’s business.

From Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent – The New York Times

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete

and

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

Two thoughts:

  • This is unprecedented in the last two decades. Not even the rise of virtualization or cloud computing triggered such a massive call to action.
  • Do you really think that all these education programs and all these rushed experts will spend any significant time on the ethical aspects of AI and long-term implications of algorithmic bias?

NATO calls for a specialists’ meeting about artificial intelligence in mid-2018

From NATO urged to rapidly absorb AI into its command and control | Jane’s 360

NATO advisers and industry are urging the allies to rapidly absorb artificial intelligence software into their militaries’ observe, orient, decide, act (OODA) loop or risk seeing the latter collapse in the face of adversaries’ increasingly sophisticated artificial intelligence (AI)-enabled attacks.

The NATO Information Systems Technology (IST) Panel Office has already arranged a 150-person meeting in Bordeaux for the end of May 2018:

In order to avoid an abstract scientific discussion, the national STB representatives will engage operational experts to participate and work with the scientists towards a common road map for future research activities in NATO that meet operational needs.

Within the OODA loop the first step ‘Observe’ is about harvesting data. Intelligent integration of heterogeneous devices, architectures of acquisition systems and sensors, decentralized management of data, and autonomous collection platforms and sensors give a huge field for improvement with Natural Language Processing and Artificial Intelligence technologies for acquiring and processing Big Data. The next step ‘Orient’ is about reasoning. Analysis of social media, information fusion, anomaly detection, and behavior modeling are domains with huge potential for Machine Learning algorithms. The same is applicable for the ‘Decide’ step where predictive analytics, augmented and virtual reality and many more technologies support the operational decision-making process. A complex battlefield and high speed operations require independently acting devices to ‘Act’ with a certain degree of Autonomy. In all steps, the application of AI technologies for automated analysis, early warnings, guaranteeing trust in the Internet of Things (IoT), and distinguishing relevant from Fake Data is mandatory.

This is the escalation that Nick Bostrom (in his book Superintelligence) and, later, Elon Musk were talking about.

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

From [1705.08421] AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips with actions localized in space and time, resulting in 210k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: the definition of atomic visual actions, which avoids collecting data for each and every complex action; precise spatio-temporal annotations with possibly multiple annotations for each human; the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court.
We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.
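
The dataset itself is distributed as plain CSV annotation files. A minimal reader, assuming the column layout of the public release (video id, keyframe timestamp, normalised box coordinates, action id, person id); verify the format against the dataset documentation before relying on this:

```python
import csv
from collections import defaultdict

def load_ava_annotations(path):
    """Group action labels by (video_id, keyframe timestamp, person_id).
    Assumes rows of: video_id, timestamp, x1, y1, x2, y2, action_id, person_id."""
    people = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            video_id, timestamp, x1, y1, x2, y2, action_id, person_id = row
            people[(video_id, float(timestamp), int(person_id))].append({
                "box": (float(x1), float(y1), float(x2), float(y2)),
                "action_id": int(action_id),
            })
    return people

if __name__ == "__main__":
    people = load_ava_annotations("ava_train.csv")   # hypothetical local file
    counts = [len(actions) for actions in people.values()]
    print("annotated person-keyframes:", len(counts))
    # "multiple labels per human occurring frequently", as the abstract notes
    print("mean action labels per person:", sum(counts) / max(len(counts), 1))
```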

Google confirms it’s using YouTube to teach AI about human actions

From Google built a dataset to teach its artificial intelligence how humans hug, cook, and fight — Quartz

Google, which owns YouTube, announced on Oct. 19 a new dataset of film clips, designed to teach machines how humans move in the world. Called AVA, or “atomic visual actions,” the videos aren’t anything special to human eyes—they’re three second clips of people drinking water and cooking curated from YouTube. But each clip is bundled with a file that outlines the person that a machine learning algorithm should watch, as well as a description of their pose, and whether they’re interacting with another human or object. It’s the digital version of pointing at a dog with a child and coaching them by saying, “dog.”

and

This technology could help Google to analyze the years of video it processes on YouTube every day. It could be applied to better target advertising based on whether you’re watching a video of people talk or fight, or in content moderation. The eventual goal is to teach computers social visual intelligence, the authors write in an accompanying research paper, which means “understanding what humans are doing, what might they do next, and what they are trying to achieve.”

Google’s video dataset is free.
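To make the “bundled file” description above concrete, here is a small, hypothetical sketch of what one spatio-temporal annotation record could look like: a person box plus an atomic action label at a given time. The field names and comma-separated layout are illustrative assumptions, not AVA’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class ActionAnnotation:
    """One hypothetical AVA-style record: who is where, doing what, and when."""
    video_id: str          # source clip identifier
    timestamp_s: float     # time (seconds) of the annotated frame
    box: tuple             # normalized person bounding box (x1, y1, x2, y2)
    action: str            # atomic visual action label, e.g. "drink"
    person_id: int         # track id, so one person can carry several labels

def parse_row(row: str) -> ActionAnnotation:
    """Parse one comma-separated line in the illustrative layout above."""
    vid, ts, x1, y1, x2, y2, action, pid = row.split(",")
    return ActionAnnotation(vid, float(ts),
                            (float(x1), float(y1), float(x2), float(y2)),
                            action, int(pid))

example = parse_row("clip_001,902.0,0.21,0.10,0.58,0.97,drink,3")
print(example)
```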

In 2015, I speculated on Twitter:

I wonder if @google already has enough @youtube videos to create a video version of Wikipedia (and if they already are machine learning it)

We want the UAE to become the world’s most prepared country for artificial intelligence 

From Mohammad Bin Rashid reveals reshuffled UAE Cabinet | Gulfnews.com

The new government – the 13th in the UAE’s history – sees the appointment of Omar Bin Sultan Al Olama, 27, as the State Minister for Artificial Intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” Shaikh Mohammad said.

Shaikh Mohammad added the new phase focuses on “future skills, future sciences and future technology, as we prepare for the centenary to ensure a better future for our generations”.

After Russia and China, the United Arab Emirates wants to make clear, too, that AI is a strategic advantage and a top priority.

Mastering the game of Go without human knowledge

From Mastering the game of Go without human knowledge : Nature

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Accumulating thousands of years of human knowledge during a period of just a few days

From AlphaGo Zero: Learning from scratch | DeepMind

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

and

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.

The new AlphaGo Zero is not impressive just because it uses no human dataset to become the world leader at what it does. It’s impressive also because it achieves that goal at a pace no human will ever be able to match.
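To give a feel for the training step behind this, here is a toy sketch of the AlphaGo Zero-style loss, squared value error plus policy cross-entropy plus L2 regularization, fitted with a tiny linear model and crude finite-difference gradients on random placeholder “self-play records”. It deliberately omits the search and self-play machinery; the real system uses deep residual networks and Monte Carlo tree search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "self-play records": in the real system each record would be a
# board position, the MCTS visit-count distribution pi over moves, and the
# final game outcome z. Here they are just random stand-ins.
N_RECORDS, N_FEATURES, N_MOVES = 256, 16, 8
states = rng.normal(size=(N_RECORDS, N_FEATURES))
pis = rng.dirichlet(np.ones(N_MOVES), size=N_RECORDS)
zs = rng.choice([-1.0, 1.0], size=N_RECORDS)

def forward(theta, x):
    """Tiny linear stand-in for the network: policy p and value v in (-1, 1)."""
    W_p = theta[: N_FEATURES * N_MOVES].reshape(N_FEATURES, N_MOVES)
    w_v = theta[N_FEATURES * N_MOVES:]
    logits = x @ W_p
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    v = np.tanh(x @ w_v)
    return p, v

def loss(theta, c=1e-4):
    """AlphaGo Zero-style loss: (z - v)^2 - pi . log p + c * ||theta||^2."""
    p, v = forward(theta, states)
    return (np.mean((zs - v) ** 2)
            - np.mean(np.sum(pis * np.log(p + 1e-12), axis=1))
            + c * np.sum(theta ** 2))

theta = rng.normal(scale=0.01, size=N_FEATURES * N_MOVES + N_FEATURES)
lr, eps = 0.05, 1e-5
for step in range(100):
    base = loss(theta)
    grad = np.zeros_like(theta)
    for i in range(theta.size):          # crude finite-difference gradient
        theta[i] += eps
        grad[i] = (loss(theta) - base) / eps
        theta[i] -= eps
    theta -= lr * grad
    if step % 20 == 0:
        print(f"step {step:3d}  loss {base:.4f}")
```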

OpenAI achieves Continuous Agent Adaptation via Meta-Learning

From Adaptation via Meta-Learning

We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.

Summary: After a few epochs of evolution, the Spiders, being the weakest, disappeared, the subpopulation of Bugs more than doubled, and the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies ended up dominating the population.

OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least under certain conditions. The environment is dynamic for a number of reasons, including the fact that the opponents are learning as well.

AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.

The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.

We know that the neocortex is a prediction machine and that human intelligence amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

From [1710.03641] Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
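As a rough intuition for the learning-to-learn framing, here is a minimal first-order sketch of gradient-based meta-learning on a toy family of 1-D regression tasks. It is a simplification in the spirit of MAML-style methods, not the paper’s algorithm, which targets nonstationary multi-agent reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(1)

def task_data(slope, n=20):
    """One toy task: a few noisy samples of y = slope * x."""
    x = rng.uniform(-2.0, 2.0, size=n)
    y = slope * x + rng.normal(scale=0.1, size=n)
    return x, y

def grad(w, x, y):
    """d/dw of the mean squared error for the scalar model y_hat = w * x."""
    return np.mean(2.0 * (w * x - y) * x)

w_meta = 5.0                  # meta-parameter: the shared initialization
inner_lr, meta_lr = 0.3, 0.01

for _ in range(2000):
    slope = rng.uniform(-3.0, 3.0)        # sample a new task
    x_tr, y_tr = task_data(slope)         # data for fast adaptation
    x_val, y_val = task_data(slope)       # data for the meta-update

    w_adapted = w_meta - inner_lr * grad(w_meta, x_tr, y_tr)   # inner step
    # First-order meta-update: move the initialization so that a single
    # inner step lands closer to a good solution on held-out task data.
    w_meta -= meta_lr * grad(w_adapted, x_val, y_val)

print("meta-learned initialization:", round(w_meta, 3))
```

For this toy family the meta-learned initialization drifts toward the center of the task distribution, the point from which a single gradient step adapts best on average.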

“Be careful; things can be worse than they appear”: Understanding Biased Algorithms and Users’ Behavior around Them in Rating Platforms

From http://social.cs.uiuc.edu/papers/ICWSM17-PrePrint.pdf

Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases?
We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms and found that one site’s algorithmic rating system biased ratings, particularly for low-to-medium quality hotels, significantly higher than the other sites (up to 37%).

Analyzing reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used a review on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This raising of awareness included practices like efforts to reverse engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust.

We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.
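A cross-platform audit of this kind boils down to comparing each platform’s ratings for the same hotels against a cross-platform baseline and checking for systematic inflation in a quality band. The sketch below uses simulated ratings and made-up numbers; it only illustrates the comparison, not the paper’s actual method or its 37% figure.

```python
import numpy as np

rng = np.random.default_rng(2)

n_hotels = 803
quality = rng.uniform(1.0, 5.0, size=n_hotels)        # latent hotel quality

# Simulated ratings on three platforms; platform C inflates low/medium hotels.
ratings = {
    "A": np.clip(quality + rng.normal(0, 0.3, n_hotels), 1, 5),
    "B": np.clip(quality + rng.normal(0, 0.3, n_hotels), 1, 5),
    "C": np.clip(quality + np.where(quality < 3.5, 0.8, 0.1)
                 + rng.normal(0, 0.3, n_hotels), 1, 5),
}

# Cross-platform baseline: median rating per hotel across the three platforms.
baseline = np.median(np.stack(list(ratings.values())), axis=0)

for name, r in ratings.items():
    low_med = quality < 3.5
    drift_low = np.mean(r[low_med] - baseline[low_med])
    drift_high = np.mean(r[~low_med] - baseline[~low_med])
    print(f"platform {name}: drift vs baseline "
          f"(low/medium hotels {drift_low:+.2f}, high-quality {drift_high:+.2f})")
```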

The hippocampus as a predictive map

From The hippocampus as a predictive map : Nature

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.

DeepMind thinks that the hippocampus summarizes future events using a “predictive map”

From The hippocampus as a ‘predictive map’ | DeepMind

Our insights were derived from reinforcement learning, the subdiscipline of AI research that focuses on systems that learn by trial and error. The key computational idea we drew on is that to estimate future reward, an agent must first estimate how much immediate reward it expects to receive in each state, and then weight this expected reward by how often it expects to visit that state in the future. By summing up this weighted reward across all possible states, the agent obtains an estimate of future reward.

Similarly, we argue that the hippocampus represents every situation – or state – in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events, known formally as the “successor representation”. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.

I wonder what Jeff Hawkins thinks about this new theory.
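For the mathematically inclined: under a fixed policy with state-transition matrix P and discount factor gamma, the successor representation described above has a closed form, M = (I - gamma * P)^-1, and the value of a state is just the M-weighted sum of one-step rewards. Here is a minimal numpy sketch on a toy chain of states; it is my own illustration, loosely echoing the work-commute-home example, not DeepMind’s code.

```python
import numpy as np

gamma = 0.9
n_states = 5                 # a toy chain: work -> commute -> school -> ... -> home

# Transition matrix under a fixed policy: mostly move right, stay at the end.
P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 0.9
    P[s, s] = 0.1
P[-1, -1] = 1.0

# Successor representation: expected discounted future occupancy of each state.
M = np.linalg.inv(np.eye(n_states) - gamma * P)

# One-step rewards: only the final ("home") state is rewarding.
r = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Value of each state = successor-weighted sum of immediate rewards.
V = M @ r
print(np.round(M, 2))
print("state values:", np.round(V, 2))
```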

Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World

From Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World | bioRxiv

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remain a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that using Hebbian-like learning rules small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

Not about artificial intelligence per se, but Jeff Hawkins was among the first to propose a unifying theory of how the brain works, in his 2005 book On Intelligence. It’s interesting to see how the theory has been refined over the last 12 years and how it might influence today’s development of AI algorithms.

Chinese state plan to dominate AI by 2030

From China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems

The plan prescribes a high level of government investment in theoretical and applied AI breakthroughs (see Part III below for more), while also acknowledging that, in China as around the world, private companies are currently leading the charge on commercial applications of AI.

The plan acknowledges, meanwhile, that China remains far behind world leaders in development of key hardware enablers of AI, such as microchips suited for machine learning use (e.g., GPUs or re-configurable processors). The plan’s ambition is underlined by its recognition of the hard road ahead.

and

China is embarking upon an agenda of “intelligentization” (智能化), seeking to take advantage of the transformative potential of AI throughout society, the economy, government, and the military. Through this new plan, China intends to pursue “indigenous innovation” in the “strategic frontier” technology of AI in furtherance of a national strategy for innovation-driven development.

the Chinese government is encouraging its own AI enterprises to pursue an approach of “going out,” including through overseas mergers and acquisitions, equity investments, and venture capital, as well as the establishment of research and development centers abroad.

China plans to develop resources and ecosystems conducive to the goal of becoming a “premier innovation center” in AI science and technology by 2030. In support of this goal, the plan calls for an “open source and open” approach that takes advantage of synergies among industry, academia, research, and applications, including through creating AI “innovation clusters.”

and

the Chinese leadership wants to ensure that advances in AI can be leveraged for national defense, through a national strategy for military-civil fusion (军民融合). According to the plan, resources and advances will be shared and transferred between civilian and military contexts. This will involve the establishment and normalizing of mechanisms for communication and coordination among scientific research institutes, universities, enterprises, and military industry.

Full translation of China’s State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan – Both the original document and the commentary on NewAmerica are critical reads.

Startup generates and sells synthetic data for AI training

From Home – Neuromation

We propose a solution whose accuracy is guaranteed by construction: synthesizing large datasets along with perfectly accurate labels. The benefits of synthetic data are manifold. It is fast to synthesize and render, perfectly accurate, tailored for the task at hand, and can be modified to improve the model and training itself. It is important to note that real data with accurate labels is still required for evaluating models trained on synthetic data, in order to guarantee acceptable performance at inference time. However, the amount of validation data required is orders of magnitude smaller than training data!

They generate and sell synthetic datasets for AI training. All data is charged per item, and comes pre-labelled.

All transactions are done using an Ethereum ERC-20 compliant token. People can mine tokens by performing the computationally intensive tasks of data generation and model training instead of mining cryptocurrency.
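The core idea, training on cheap, perfectly-labelled synthetic data while holding back a small set of real labelled data purely for validation, can be sketched in a few lines. Everything below is a toy: a generated 2-D point-classification problem stands in for rendered images, and the “real” validation set is simulated with a slight distribution shift. This is not Neuromation’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def synthesize(n):
    """Synthetic samples with perfectly accurate labels, free to generate."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)      # label known by construction
    return X, y

# Large synthetic training set, small "real" validation set (here simulated
# with a slight distribution shift to mimic reality).
X_train, y_train = synthesize(50_000)
X_real = rng.normal(loc=0.1, size=(500, 2))
y_real = (X_real[:, 0] + X_real[:, 1] > 0).astype(float)

# Tiny logistic-regression model trained by gradient descent on synthetic data.
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train) / len(y_train))
    b -= 0.5 * np.mean(p - y_train)

# Real data is used only to check that the synthetically trained model holds up.
pred = (1.0 / (1.0 + np.exp(-(X_real @ w + b))) > 0.5).astype(float)
print("accuracy on the held-out 'real' set:", np.mean(pred == y_real))
```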

Nick Bostrom joins newly formed Ethics & Society research group at DeepMind

From DeepMind launches new research team to investigate AI ethics – The Verge

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

Great effort. I’d love to attend a conference arranged by groups like this one.

Information Bottleneck Theory might explain how deep (and human) learning works

From New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.

and

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

but

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example.

The video is here.
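For reference, the objective behind the theory is compact: pick a compressed representation T of the input X that minimizes I(X;T) - beta * I(T;Y), i.e., throw away as much about X as possible while keeping what predicts Y. Below is a minimal numpy sketch that evaluates this trade-off for a hand-written encoder on a tiny discrete toy distribution; it is my own illustration of the 1999 objective, not Tishby’s experiments.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

# Toy world: 4 input symbols X, binary label Y; X carries label info plus noise.
p_xy = np.array([[0.20, 0.05],
                 [0.20, 0.05],
                 [0.05, 0.20],
                 [0.05, 0.20]])          # rows: x, columns: y (sums to 1)

p_x = p_xy.sum(axis=1)

# A stochastic encoder p(t|x) that squeezes the 4 inputs into 2 clusters,
# discarding the within-cluster distinction (the "irrelevant detail").
p_t_given_x = np.array([[1.0, 0.0],
                        [1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 1.0]])

p_xt = p_x[:, None] * p_t_given_x                 # joint p(x, t)
p_ty = p_t_given_x.T @ p_xy                       # joint p(t, y)

beta = 4.0
I_xt = mutual_information(p_xt)
I_ty = mutual_information(p_ty)
print(f"I(X;T) = {I_xt:.3f} bits, I(T;Y) = {I_ty:.3f} bits, "
      f"IB objective = {I_xt - beta * I_ty:.3f}")
```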

Top academic and industry minds in a panel about the future of AI

From Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds – YouTube

From left to right: Elon Musk (Tesla, SpaceX), Stuart Russell (University Berkeley), Bart Selman (Cornell University), Ray Kurzweil (Google, inventor, futurist), David Chalmers (New York University, Australian National University, philosopher), Nick Bostrom (University of Oxford, philosopher), Demis Hassabis (DeepMind), Sam Harris (author, philosopher, neuroscientist, atheist), and Jaan Tallinn (Skype, Kaaza) discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

Max Tegmark put some of the brightest minds of our time in a room to discuss Artificial General Intelligence and Superintelligence. This is the video of the most significant panel at that event, the Beneficial AI 2017 conference.

It’s a one-hour video, totally worth your time.

On Cognitive Computing vs Artificial Intelligence

From Ginni Rometty on Artificial Intelligence – Bloomberg

Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell my why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

and

When I went to Davos in January, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility if we build this stuff to guide it safely into the world. First, be clear on the purpose, work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who are the experts, where did the data come from. And when consumers are using AI, you inform them that they are and inform the company as well that owns the intellectual property. And the third thing is to be committed to skill.

IBM and its term “cognitive computing” are all about so-called “weak AI”. The problem is that explaining the evidence and confidence behind an answer is, at the moment, incredibly challenging compared with just giving the answer in a black-box fashion.