Artificial Intelligence

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at the eye’s blood vessels, with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.
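
The approach described is a deep network trained to read risk factors straight out of fundus images. Here’s a minimal sketch of what such a multi-task model could look like in Keras; the architecture, input size, and head names are my own illustrative choices, not Google’s actual model, which is far larger and trained on the ~300,000-patient dataset mentioned above.

```python
# A minimal multi-task CNN sketch, NOT Google's actual model: one trunk
# reads the retinal image, separate heads predict risk factors.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(299, 299, 3))        # fundus photograph
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

age = layers.Dense(1, name="age")(x)                              # regression head
smoker = layers.Dense(1, activation="sigmoid", name="smoker")(x)  # binary head

model = Model(inputs, [age, smoker])
model.compile(optimizer="adam",
              loss={"age": "mse", "smoker": "binary_crossentropy"})
# model.fit(images, {"age": ages, "smoker": smoker_labels}, epochs=10)
```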

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware.

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab.

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
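
Those uniformity numbers matter because every physical synapse is a weight, and weight error compounds across a network. A back-of-the-envelope way to feel the difference, in a few lines of numpy; the 25% figure for an amorphous device is my own stand-in for contrast, since the article only says such devices are much less uniform:

```python
# Toy experiment: perturb an idealized trained weight matrix with
# multiplicative device noise and watch the output drift.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(784, 10))      # stand-in for trained weights
x = rng.normal(size=(1, 784))             # one input sample
ideal = x @ weights

for sigma, label in [(0.04, "4% (reported SiGe synapses)"),
                     (0.25, "25% (hypothetical amorphous device)")]:
    noisy = weights * rng.normal(1.0, sigma, size=weights.shape)
    drift = np.abs(x @ noisy - ideal).mean()
    print(f"device variation {label}: mean output drift {drift:.3f}")
```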

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI) and, before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

From Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
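
The article is about full probabilistic programming frameworks in the vein of Google’s Edward and Uber’s Pyro, but you can get a taste of a self-doubting network with a much simpler trick, Monte Carlo dropout: keep dropout active at inference time and read the spread of repeated predictions as uncertainty. A sketch in PyTorch (my example, not the frameworks the article covers):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1)
)

def predict_with_uncertainty(model, x, n_samples=100):
    model.train()                  # deliberately keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 10)
mean, std = predict_with_uncertainty(model, x)
print(f"prediction {mean.item():.3f} ± {std.item():.3f}")  # high std = doubt
```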

Using Artificial Intelligence to augment human intelligence

From Using Artificial Intelligence to Augment Human Intelligence

in one common view of AI our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle.

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful if it weren’t buried under a wall of complexity that puts it out of reach for many readers.

This could be a seminal paper.

Limb reanimation through neuroscience and machine learning

From First paralysed person to be ‘reanimated’ offers neuroscience insights : Nature

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand, and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.

Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.
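
To make the pipeline concrete: the laboratory system records cortical activity, a trained decoder classifies the intended motion, and the sleeve stimulates the matching muscle pattern. A heavily simplified sketch of the decoding step only, with made-up data shapes and labels; the real system is far more sophisticated:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_channels, window = 96, 20          # hypothetical electrode count / window
MOTIONS = ["rest", "wrist_flex", "wrist_ext", "grip", "pinch", "point"]

# Stand-in features; real input would be processed motor-cortex recordings.
X = rng.normal(size=(600, n_channels * window))
y = rng.integers(len(MOTIONS), size=600)

decoder = SVC().fit(X[:500], y[:500])          # train the motion classifier

def stimulate(motion: str):
    """Placeholder for driving the forearm stimulation sleeve."""
    print(f"stimulating forearm pattern: {motion}")

stimulate(MOTIONS[decoder.predict(X[500:501])[0]])
```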

This story is a year and a half old, but I just found out about it, and I think it’s a critical piece of the big picture that H+ is trying to narrate.

A growing number of artificial intelligence researchers focus on algorithmic bias

Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Prof. Crawford is not the only one looking into algorithmic bias. As she shows in her presentation, a growing number of research papers focus on it, and even government agencies have started questioning how AI decisions are made.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias would manipulate people’s worldview in ways that mass media and politics can’t even dream about.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

A task force to review New York City agencies’ use of algorithms and their bias

From New York City Takes on Algorithmic Discrimination | American Civil Liberties Union

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

Timely, as more and more AI researchers look into algorithmic bias.

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
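
The “debias” step at the heart of the paper is geometrically simple: estimate a gender direction, then subtract each neutral word’s component along it. A compressed sketch with random stand-in vectors; the paper derives the direction with PCA over many definitional pairs and includes an “equalize” step that I omit here:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["he", "she", "man", "woman", "receptionist", "programmer"]}

# Bias direction from definitional pairs (just two pairs here for brevity).
g = (emb["she"] - emb["he"]) + (emb["woman"] - emb["man"])
g /= np.linalg.norm(g)

def neutralize(v, g):
    """Remove the component of v along the bias direction g."""
    return v - (v @ g) * g

for w in ["receptionist", "programmer"]:       # gender-neutral occupations
    emb[w] = neutralize(emb[w], g)
    print(w, "component along g:", round(float(emb[w] @ g), 8))
```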

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

The percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

I always wondered how it would be if a superior species landed on earth and showed us how they play chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?
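
Since the models were open-sourced, trying the machine-designed network yourself is trivial: NASNet later shipped in Keras’ application zoo. A usage sketch (weights download on first use; the random batch is only there to show the call shape):

```python
import numpy as np
from tensorflow.keras.applications import nasnet

model = nasnet.NASNetMobile(weights="imagenet")   # the smaller variant

def classify(image_batch):
    """image_batch: float array of shape (n, 224, 224, 3), values in [0, 255]."""
    preds = model.predict(nasnet.preprocess_input(image_batch))
    return nasnet.decode_predictions(preds, top=3)

print(classify(np.random.uniform(0, 255, size=(1, 224, 224, 3))))
```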

We are waiting to develop a human-level artificial intelligence and see if it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.

We are entering a cycle where humans and algorithms are adapting to each other

From Exploring Cooperation with Social Machines:

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

You should also read Implementing Algorithms In The Form Of Scripts Has Been An Early Step In Training Humans To Be More Like Machines

Implementing algorithms in the form of scripts has been an early step in training humans to be more like machines

From Cooperating with Algorithms in the Workplace:

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and most particularly, cost savings. As a result phone calls to services such as hotels, shops and restaurants, may now have a script read out loud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely greatly restrict human agency by failing to define clear boundaries for the domain of the algorithm and recognizing the need for adaptation outside these boundaries.

Thus, often if a worker is asked a specialized or specific query, they lack the ability to respond to it and will either turn away the customer, or accelerate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Not only is the paper mighty interesting, but the whole body of research it belongs to is worth serious investigation.

Also, this TED Talk by David Lee touches on the topic in quite an interesting way: Why jobs of the future won’t feel like work

What is consciousness, and could machines have it?

From What is consciousness, and could machines have it? | Science

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

We no longer know if we’re seeing the same information or what anybody else is seeing

From Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED.com

As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this.

and

What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

and

Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.

Longer than usual (23 min) TED talk, but worth it.

I, too, believe that there’s no malicious intent behind the increasingly capable AI we see these days. Quite the opposite: I believe that most people working at Google or Facebook are there to make a positive impact, to change the world for the better. The problem, on top of the business model, is that a lot of people, even the most brilliant ones, don’t take the time to ponder the long-term consequences of the things they are building in the way they are building them today.

The minimum dataset scale for deep learning

From Google Brain chief: Deep learning takes at least 100,000 examples | VentureBeat

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

The dangerous rush to build AI expertise

From Lyft’s biggest AI challenge is getting engineers up to speed | VentureBeat

Machine learning and deep learning AI have gone from the niche realm of PhDs to tools that will be used throughout all types of companies. That equates to a big skills gap, says Gil Arditi, product lead for Lyft’s Machine Learning Platform.

and

Today, of course, any engineer with a modicum of experience can spin up databases on user-friendly cloud services. That’s the path that AI processes have to travel, he says. Luckily, machine learning is making AI more accessible to newbies without a PhD in statistics, mathematics, or computer science.

“Part of the promise of machine learning in general but deep learning in particular … is that there actually is not a lot of statistical modeling,” said Arditi. “Instead of giving to the machines exact formulas that will address the problem, you just give it the tools and treat it like a black box.”

From LinkedIn plans to teach all its engineers the basics of using AI | VentureBeat

The academy isn’t designed to give engineers an academic grounding in machine learning as a discipline. It’s designed instead to prepare them for using AI in much the same way that they’d use a system like QuickSort, an algorithm for sorting data that’s fed into it. Users don’t have to understand how the underlying system works, they just need to know the right way to implement it.

That’s the goal for LinkedIn, Agarwal said. Thus far, six engineers have made it through the AI academy and are deploying machine learning models in production as a result of what they learned. The educational program still has a ways to go (Agarwal said he’d grade it about a “C+” at the moment), but it has the potential to drastically affect LinkedIn’s business.
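
The QuickSort comparison is apt: in a modern library, the whole “black box” contract is a fit/predict pair. This is roughly the level of abstraction the academy seems to target; my example uses scikit-learn rather than whatever stack LinkedIn runs internally:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=0)

clf = RandomForestClassifier().fit(X_train, y_train)    # the black box
print("held-out accuracy:", clf.score(X_test, y_test))  # no internals needed
```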

From Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent – The New York Times

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

and

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

Two thoughts:

  • This is unprecedented in the last two decades. Not even the rise of virtualization or cloud computing triggered such a massive call to action.
  • Do you really think that all these education programs and all these rushed experts will spend any significant time on the ethical aspects of AI and the long-term implications of algorithmic bias?

NATO calls for a specialist meeting on artificial intelligence in mid-2018

From NATO urged to rapidly absorb AI into its command and control | Jane’s 360

NATO advisers and industry are urging the allies to rapidly absorb artificial intelligence software into their militaries’ observe, orient, decide, act (OODA) loop or risk seeing the latter collapse in the face of adversaries’ increasingly sophisticated artificial intelligence (AI)-enabled attacks.

The NATO Information Systems Technology (IST) Panel Office has already arranged a 150-person meeting in Bordeaux for the end of May 2018:

In order to avoid an abstract scientific discussion, the national STB representatives will engage operational experts to participate and work with the scientists towards a common road map for future research activities in NATO that meet operational needs.

Within the OODA loop the first step ‘Observe’ is about harvesting data. Intelligent integration of heterogeneous devices, architectures of acquisition systems and sensors, decentralized management of data, and autonomous collection platforms and sensors give a huge field for improvement with Natural Language Processing and Artificial Intelligence technologies for acquiring and processing Big Data. The next step ‘Orient’ is about reasoning. Analysis of social media, information fusion, anomaly detection, and behavior modeling are domains with huge potential for Machine Learning algorithms. The same is applicable for the ‘Decide’ step where predictive analytics, augmented and virtual reality and many more technologies support the operational decision-making process. A complex battlefield and high speed operations require independently acting devices to ‘Act’ with a certain degree of Autonomy. In all steps, the application of AI technologies for automated analysis, early warnings, guaranteeing trust in the Internet of Things (IoT), and distinguishing relevant from Fake Data is mandatory.

This is the escalation that Nick Bostrom first (in his book Superintelligence) and Elon Musk later were talking about.

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

From [1705.08421] AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips with actions localized in space and time, resulting in 210k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: the definition of atomic visual actions, which avoids collecting data for each and every complex action; precise spatio-temporal annotations with possibly multiple annotations for each human; the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court.
We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.
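
For a sense of what “densely annotates” means in practice, each AVA row ties one person’s bounding box at one timestamp to one atomic action. A reader for that kind of CSV layout; the exact column order follows the released files as I understand them, so treat it as an assumption:

```python
import csv
from collections import namedtuple

Annotation = namedtuple("Annotation",
                        "video_id timestamp x1 y1 x2 y2 action_id")

def load_ava(path):
    """Yield one box+action annotation per CSV row."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            vid, t, x1, y1, x2, y2, action = row[:7]
            yield Annotation(vid, float(t), float(x1), float(y1),
                             float(x2), float(y2), int(action))

# for ann in load_ava("ava_train.csv"):          # hypothetical filename
#     print(ann.video_id, ann.timestamp, ann.action_id)
```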

Google confirms it’s using YouTube to teach AI about human actions

From Google built a dataset to teach its artificial intelligence how humans hug, cook, and fight — Quartz

Google, which owns YouTube, announced on Oct. 19 a new dataset of film clips, designed to teach machines how humans move in the world. Called AVA, or “atomic visual actions,” the videos aren’t anything special to human eyes—they’re three second clips of people drinking water and cooking curated from YouTube. But each clip is bundled with a file that outlines the person that a machine learning algorithm should watch, as well as a description of their pose, and whether they’re interacting with another human or object. It’s the digital version of pointing at a dog with a child and coaching them by saying, “dog.”

and

This technology could help Google to analyze the years of video it processes on YouTube every day. It could be applied to better target advertising based on whether you’re watching a video of people talk or fight, or in content moderation. The eventual goal is to teach computers social visual intelligence, the authors write in an accompanying research paper, which means “understanding what humans are doing, what might they do next, and what they are trying to achieve.”

Google’s video dataset is free.

In 2015, I speculated on Twitter:

I wonder if @google already has enough @youtube videos to create a video version of Wikipedia (and if they already are machine learning it)

We want the UAE to become the world’s most prepared country for artificial intelligence 

From Mohammad Bin Rashid reveals reshuffled UAE Cabinet | Gulfnews.com

The new government – the 13th in the UAE’s history – sees the appointment of Omar Bin Sultan Al Olama, 27, as the State Minister for Artificial Intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” Shaikh Mohammad said.

Shaikh Mohammad added the new phase focuses on “future skills, future sciences and future technology, as we prepare for the centenary to ensure a better future for our generations”.

After Russia and China, the United Arab Emirates wants to make clear, too, that AI is a strategic advantage and a top priority.

Mastering the game of Go without human knowledge

From Mastering the game of Go without human knowledge : Nature

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Accumulating thousands of years of human knowledge during a period of just a few days

From AlphaGo Zero: Learning from scratch | DeepMind

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

and

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.
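
The loop itself is conceptually tiny: play yourself, learn from the outcome, repeat with the stronger player. Here is the same idea shrunk to tic-tac-toe, with a tabular value function standing in for the deep network and the tree search; this is my own toy analogue, obviously nothing like DeepMind’s system:

```python
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

V = defaultdict(float)   # value of a board for the player who just moved
EPS, ALPHA = 0.1, 0.5

def after(board, move, mark):
    nxt = board.copy()
    nxt[move] = mark
    return "".join(nxt)

def choose(board, mark):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPS:                 # exploration
        return random.choice(moves)
    return max(moves, key=lambda m: V[after(board, m, mark)])

for game in range(20000):                     # "plays games against itself"
    board, mark, history = list("." * 9), "X", []
    while winner(board) is None and "." in board:
        move = choose(board, mark)
        board[move] = mark
        history.append(("".join(board), mark))
        mark = "O" if mark == "X" else "X"
    w = winner(board)
    for state, mover in history:              # learn from the final outcome
        target = 0.0 if w is None else (1.0 if mover == w else -1.0)
        V[state] += ALPHA * (target - V[state])

print(f"learned values for {len(V)} positions from self-play alone")
```

Even this toy version shows the tabula rasa property: the only inputs are the rules and the final result of each game.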

The new AlphaGo Zero is not impressive just because it uses no data set to become the world leader at what it does. It’s also impressive because it achieves that goal at a pace no human will ever be able to match.

OpenAI achieves Continuous Agent Adaptation via Meta-Learning

From Continuous Adaptation via Meta-Learning

We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.

Summary: After a few epochs of evolution, Spiders, being the weakest, disappeared, the subpopulation of Bugs more than doubled, the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies end up dominating the population.
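
The selection scheme in that quote reduces to a dozen lines: pair agents at random, let the winner replicate. With assumed win rates (the real experiment measured these through multi-round RoboSumo fights between learned policies), the same collapse of the weakest type falls out:

```python
import random
from collections import Counter

STRENGTH = {"spider": 0.40, "bug": 0.55, "ant": 0.50}   # assumed win rates
population = ["spider"] * 350 + ["bug"] * 350 + ["ant"] * 350

for epoch in range(10):
    random.shuffle(population)                 # random matchmaking
    next_gen = []
    for a, b in zip(population[::2], population[1::2]):
        p_a = STRENGTH[a] / (STRENGTH[a] + STRENGTH[b])
        win = a if random.random() < p_a else b
        next_gen += [win, win]                 # winner replicates, loser dies
    population = next_gen

print(Counter(population))                     # weakest anatomy dies out
```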

OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least in certain conditions. The environment is dynamic for a number of reasons, including the fact that opponents are learning as well.

AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.

The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.

We know that the neocortex is a prediction machine and that human intelligence amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

From [1710.03641] Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.

“Be careful; things can be worse than they appear”: Understanding Biased Algorithms and Users’ Behavior around Them in Rating Platforms

From http://social.cs.uiuc.edu/papers/ICWSM17-PrePrint.pdf

Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases?
We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms and found that one site’s algorithmic rating system biased ratings, particularly low-to-medium quality hotels, significantly higher than others (up to 37%).

Analyzing reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used a review on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This raising of awareness included practices like efforts to reverse engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust.

We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.

The hippocampus as a predictive map

From The hippocampus as a predictive map : Nature

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.

DeepMind thinks that the hippocampus summarizes future events using a “predictive map”

From The hippocampus as a ‘predictive map’ | DeepMind

Our insights were derived from reinforcement learning, the subdiscipline of AI research that focuses on systems that learn by trial and error. The key computational idea we drew on is that to estimate future reward, an agent must first estimate how much immediate reward it expects to receive in each state, and then weight this expected reward by how often it expects to visit that state in the future. By summing up this weighted reward across all possible states, the agent obtains an estimate of future reward.

Similarly, we argue that the hippocampus represents every situation – or state – in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events, known formally as the “successor representation”. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.
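
The successor representation has a crisp linear-algebra form: for a fixed policy with transition matrix T, the matrix M = (I - γT)^(-1) holds the expected discounted future occupancy of every state from every state, and values are just M·R. That is exactly why changing rewards is cheap, as this numpy sketch shows; the four-state chain mirrors DeepMind’s commute example:

```python
import numpy as np

# Tiny chain: work -> commute -> school -> home (absorbing).
T = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
gamma = 0.9

M = np.linalg.inv(np.eye(4) - gamma * T)      # successor matrix

R = np.array([0.0, 0.0, 0.0, 1.0])            # reward only at home
print("state values:", M @ R)

R_new = np.array([0.0, 0.0, 1.0, 0.0])        # reward moves to school
print("re-evaluated instantly:", M @ R_new)   # no re-simulation needed
```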

I wonder what Jeff Hawkins thinks about this new theory.

Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World

From Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World | bioRxiv

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that using Hebbian-like learning rules small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

Not on artificial intelligence per se, but Jeff Hawkins was the first to suggest a unifying theory of how the brain works in 2005 with his book On Intelligence. It’s interesting to see how the theory has been refined in the last 12 years and how it might influence today’s development of AI algorithms.

Chinese state plan to dominate AI by 2030

From China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems

The plan prescribes a high level of government investment in theoretical and applied AI breakthroughs (see Part III below for more), while also acknowledging that, in China as around the world, private companies are currently leading the charge on commercial applications of AI.

The plan acknowledges, meanwhile, that China remains far behind world leaders in development of key hardware enablers of AI, such as microchips suited for machine learning use (e.g., GPUs or re-configurable processors). The plan’s ambition is underlined by its recognition of the hard road ahead.

and

China is embarking upon an agenda of “intelligentization” (智能化), seeking to take advantage of the transformative potential of AI throughout society, the economy, government, and the military. Through this new plan, China intends to pursue “indigenous innovation” in the “strategic frontier” technology of AI in furtherance of a national strategy for innovation-driven development.

the Chinese government is encouraging its own AI enterprises to pursue an approach of “going out,” including through overseas mergers and acquisitions, equity investments, and venture capital, as well as the establishment of research and development centers abroad.

China plans to develop resources and ecosystems conducive to the goal of becoming a “premier innovation center” in AI science and technology by 2030. In support of this goal, the plan calls for an “open source and open” approach that takes advantage of synergies among industry, academia, research, and applications, including through creating AI “innovation clusters.”

and

the Chinese leadership wants to ensure that advances in AI can be leveraged for national defense, through a national strategy for military-civil fusion (军民融合). According to the plan, resources and advances will be shared and transferred between civilian and military contexts. This will involve the establishment and normalizing of mechanisms for communication and coordination among scientific research institutes, universities, enterprises, and military industry.

Full translation of China’s State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan – Both the original document and the commentary on NewAmerica are critical reads.

Startup generates and sells synthetic data for AI training

From Home – Neuromation

We propose a solution whose accuracy is guaranteed by construction: synthesizing large datasets along with perfectly accurate labels. The benefits of synthetic data are manifold. It is fast to synthesize and render, perfectly accurate, tailored for the task at hand, and can be modified to improve the model and training itself. It is important to note that real data with accurate labels is still required for evaluating models trained on synthetic data, in order to guarantee acceptable performance at inference time. However, the amount of validation data required is orders of magnitude smaller than training data!

They generate and sell synthetic datasets for AI training. All data is charged per item, and comes pre-labelled.

All transactions are done using an extended ERC-20 compliant Ethereum token. People can mine tokens by performing the computationally intensive tasks of data generation and model training instead of mining cryptocurrency.
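To make “accuracy guaranteed by construction” concrete, here is a toy sketch of synthetic data generation (my own illustration, unrelated to Neuromation’s actual pipeline): because every image is rendered from a known specification, its label is exact by definition.

```python
# Toy synthetic-data generator -- my own illustration, not Neuromation's
# pipeline. Each image is rendered from a known spec, so the label ships
# with the image and is correct by construction.
import numpy as np

rng = np.random.default_rng(42)

def render_example(n=32):
    """Render a filled square or disc at a random position; return (image, label)."""
    img = np.zeros((n, n), dtype=np.float32)
    label = int(rng.integers(2))                 # 0 = square, 1 = disc
    cx, cy = rng.integers(8, n - 8, size=2)      # random centre
    r = int(rng.integers(3, 7))                  # half-width / radius
    yy, xx = np.mgrid[:n, :n]
    if label == 0:
        img[(np.abs(yy - cy) <= r) & (np.abs(xx - cx) <= r)] = 1.0
    else:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0
    return img, label                            # the label is free and exact

# Synthesize as much perfectly-labelled training data as you like.
dataset = [render_example() for _ in range(1000)]
```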

Nick Bostrom joins newly formed Ethics & Society research group at DeepMind

From DeepMind launches new research team to investigate AI ethics – The Verge

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

Great effort. I’d love to attend a conference arranged by groups like this one.

Information Bottleneck Theory might explain how deep (and human) learning works

From New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.

and

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

but

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example.

The video is here.
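For reference, the 1999 objective Tishby is referring to fits on one line. In standard notation, with T the compressed representation of the input X and Y the target:

```latex
% The information bottleneck (Tishby, Pereira, Bialek, 1999): choose a
% stochastic encoder p(t|x) that throws away information about the input X
% (first term) while keeping information about the target Y (second term);
% beta sets the trade-off between compression and prediction.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```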

Top academic and industry minds in a panel about the future of AI

From Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds – YouTube

From left to right: Elon Musk (Tesla, SpaceX), Stuart Russell (UC Berkeley), Bart Selman (Cornell University), Ray Kurzweil (Google, inventor, futurist), David Chalmers (New York University, Australian National University, philosopher), Nick Bostrom (University of Oxford, philosopher), Demis Hassabis (DeepMind), Sam Harris (author, philosopher, neuroscientist, atheist), and Jaan Tallinn (Skype, Kazaa) discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

Max Tegmark put some of the brightest minds of our time in a room to discuss Artificial General Intelligence and Superintelligence. This is the video of the most significant panel at that event, the Beneficial AI 2017 conference.

It’s a one-hour video, totally worth your time.

On Cognitive Computing vs Artificial Intelligence

From Ginni Rometty on Artificial Intelligence – Bloomberg

Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

and

When I went to Davos in January, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility if we build this stuff to guide it safely into the world. First, be clear on the purpose, work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who are the experts, where did the data come from. And when consumers are using AI, you inform them that they are and inform the company as well that owns the intellectual property. And the third thing is to be committed to skill.

IBM and its term “cognitive computing” are all about so-called “weak AI”. The problem is that explaining the reasoning behind an answer is, at the moment, incredibly challenging compared to just giving the answer in a black-box fashion.

The electromagnetic spectrum is now the new high ground on the battlefield

From Artificial Intelligence Could Help Neutralize Enemy Bombs

Capt. Scott Kraft, commanding officer at the Naval Surface Warfare Center Indian Head technology division in Maryland, said artificial intelligence and big data analytics could potentially help technicians more quickly recognize exactly what type of bomb they are dealing with and choose the best option for neutralizing it. The vast amount of data collected during the past 16 years of war could be exploited to make faster decisions in combat situations, he said.

and

AI could also help EOD forces defeat electronic warfare threats by detecting sources of transmission and interference, officials said.

“The electromagnetic spectrum is now the new high ground on the battlefield,” Young said. U.S. troops “have to have situational awareness of it, what’s happening and why, and if we don’t we’re going to be at a disadvantage.”

Signals interference can impede the operations of robots and other EOD tools.

“If you’ve been to theater lately … you’ve heard about a lot of the counter-UAS systems along with all the jammers, along with all the electronic warfare systems,” Young said.

“It becomes very complex. So we want to try to simplify that” for operators that aren’t EW experts, Young said.

The whole article is about artificial intelligence and drone technologies applied to explosive ordnance disposal. However, reading it, it’s easy to see how AI is considered a strategic weapon and could be used for many applications, not just improvised explosive device (IED) discovery and disposal. And some military organizations have very large data sets to train AI.

The possible applications go all the way to the supersoldier scenarios, as I heard from at least one startup.

No surprise Putin said that whoever leads in AI will rule the world.

Real-time people and object recognition for check-out at a retail shop

From Autonomous Checkout, Real Time System v0.21 – YouTube

This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.

A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition during in-store shopping.

Autonomous checkout is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in their first retail shop.

Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.

Once smart contact lenses become a reality, this will be inevitable.

AI can correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women

From Deep neural networks are more accurate than humans at detecting sexual orientation from facial images | PsyArXiv Preprints

We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation.

Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).

Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy.

Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.

Let me reiterate: The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.

Imagine if this analysis were incorporated into the hiring process and used to discriminate against candidates.

I think that algorithms can be biased, harmful, and even deadly

From Pioneering computer scientist calls for National Algorithm Safety Board | Techworld

Renowned computer scientist Ben Shneiderman has a plan for how to ensure algorithmic accountability. The University of Maryland professor and founder of its Human-Computer Interaction Lab outlined his strategy at the 2017 Turing Lecture on Tuesday. “What I’m proposing is a National Algorithm Safety Board,” Shneiderman told the audience in London’s British Library. The board would provide three forms of independent oversight: planning, continuous monitoring, and retrospective analysis. Combined, they provide a basis to ensure the correct system is selected and then supervised, and lessons can be learnt to make better algorithms in future.

The story of Ferguson wasn’t algorithm-friendly. It’s not “likable.”

From Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

Another exceptional TED Talk.

Modern AIs do not read, do not understand. They only pretend that they do.

From Noriko Arai: Can a robot pass a university entrance exam? | TED Talk

Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo — without actually understanding a thing. While it’s not matriculating anytime soon, Todai Robot’s success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?

The key idea of this beautiful talk:

we humans can understand the meaning. That is something which is very, very lacking in AI. But most of the students just pack the knowledge without understanding the meaning of the knowledge, so that is not knowledge, that is just memorizing, and AI can do the same thing. So we have to think about a new type of education.

Whoever becomes the leader in AI will become the ruler of the world

From ‘Whoever leads in AI will rule the world’: Putin to Russian children on Knowledge Day — RT News

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” Russian President Vladimir Putin said.

However, the president said he would not like to see anyone “monopolize” the field.

“If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today,” he told students from across Russia via satellite link-up, speaking from the Yaroslavl region.

Elon Musk replies to this specific article on Twitter:

It begins ..

and

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.

Just as a small team of five plus AI could overturn a market, a small, weak government plus AI could overturn the geopolitical scene. And human augmentation is a key milestone to accomplish that. I have already heard that multiple companies mentioned here on H+ collaborate with military and government agencies.

Machine-learning software didn’t just mirror those biases, it amplified them

From Machines Learn a Biased View of Women | WIRED

…Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men.

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.

Bias in artificial general intelligence may lead to catastrophic outcomes, but even the bias in “weak AI”, designed to just assist and expand human intelligence, poses a significant risk.

Perception of augmented humans might be more distorted than ever.
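The “amplification” claim is easy to make concrete. A toy sketch with made-up numbers (my own illustration, not the study’s code or its actual figures):

```python
# Toy illustration of bias amplification -- invented numbers, not the
# study's data. The training labels skew "cooking" toward women; a model
# leaning on the gender cue can return an even stronger skew.
from collections import Counter

def woman_ratio(pairs, activity="cooking"):
    """Fraction of images of `activity` whose agent is labelled 'woman'."""
    c = Counter(g for a, g in pairs if a == activity)
    return c["woman"] / (c["woman"] + c["man"])

# Hypothetical training labels: a 2:1 skew toward women for "cooking".
train_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
# Hypothetical model predictions: the skew has grown past the data's own bias.
predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

print(f"bias in dataset:     {woman_ratio(train_labels):.2f}")  # 0.66
print(f"bias in predictions: {woman_ratio(predictions):.2f}")   # 0.84
```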

Lethal autonomous weapons threaten to become the third revolution in warfare

From Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons – Future of Life Institute

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

and

Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

From PathNet: Evolution Channels Gradient Descent in Super Neural Networks

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting.
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.

Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function.

We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A.
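To make the evolutionary outer loop concrete, here is a toy sketch of tournament selection over pathway genotypes (my own reading of the abstract, not DeepMind’s implementation; the fitness function is a stand-in for training the parameters along the path with backpropagation and measuring task performance):

```python
# Toy sketch of PathNet's outer loop -- my own reading of the abstract, not
# DeepMind's implementation. A genotype picks which modules each layer of a
# pathway may use; a tournament copies the fitter pathway over the weaker
# one, with mutation. fitness() is a stand-in for "train the parameters
# along this path and measure task performance".
import random

N_LAYERS, N_MODULES, ACTIVE = 3, 10, 4

def random_path():
    # One genotype: for each layer, the modules this pathway may use.
    return [random.sample(range(N_MODULES), ACTIVE) for _ in range(N_LAYERS)]

def fitness(path):
    # Stand-in objective so the sketch runs: prefer low module indices.
    return -sum(sum(layer) for layer in path)

def mutate(path, p=0.1):
    return [[random.randrange(N_MODULES) if random.random() < p else m
             for m in layer]
            for layer in path]

population = [random_path() for _ in range(20)]
for _ in range(500):
    a, b = random.sample(range(len(population)), 2)  # pick two pathways
    if fitness(population[a]) < fitness(population[b]):
        a, b = b, a                                  # a is now the winner
    population[b] = mutate(population[a])            # overwrite the loser

best = max(population, key=fitness)
print(best, fitness(best))
```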