Artificial Intelligence

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
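
The geometric idea is compact enough to sketch. Below is a minimal illustration of the two steps the abstract describes, estimating a gender direction and removing that component from gender-neutral words; this is my own toy sketch, not the authors’ code, and the `vectors` dictionary is a hypothetical stand-in for a real embedding.

```python
import numpy as np

# Hypothetical toy embedding: word -> unit vector (in practice, loaded from
# word2vec / GloVe trained on Google News).
rng = np.random.default_rng(0)
vectors = {w: rng.standard_normal(300) for w in
           ["he", "she", "man", "woman", "receptionist", "queen"]}
vectors = {w: v / np.linalg.norm(v) for w, v in vectors.items()}

# Step 1: estimate the gender direction from definitional pairs.
# (The paper aggregates several pairs with a PCA step; a single difference
# is the simplest possible proxy.)
g = vectors["he"] - vectors["she"]
g /= np.linalg.norm(g)

# Step 2: remove the gender component from gender-neutral words only.
def neutralize(v, direction):
    v = v - np.dot(v, direction) * direction   # subtract the projection
    return v / np.linalg.norm(v)

vectors["receptionist"] = neutralize(vectors["receptionist"], g)
# "queen" is gender-definitional, so its association with "female" is kept.
```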

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

The percentage of Chinese researchers contributing to the top 100 AI journals and conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

I always wondered how it would be if a superior species landed on earth and showed us how they play chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP)

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?

We are still waiting to develop a human-level artificial intelligence and see whether it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.

We are entering a cycle where humans and algorithms are adapting to each other

From Exploring Cooperation with Social Machines:

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

You should also read Implementing Algorithms In The Form Of Scripts Has Been An Early Step In Training Humans To Be More Like Machines

Implementing algorithms in the form of scripts has been an early step in training humans to be more like machines

From Cooperating with Algorithms in the Workplace:

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and most particularly, cost savings. As a result phone calls to services such as hotels, shops and restaurants, may now have a script read out loud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely greatly restrict human agency by failing to define clear boundaries for the domain of the algorithm and recognizing the need for adaptation outside these boundaries.

Thus, often if a worker is asked a specialized or specific query, they lack the ability to respond to it and will either turn away the customer, or accelerate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Not only is the paper mighty interesting, but the whole body of research it belongs to is worth serious investigation.

What is consciousness, and could machines have it?

From What is consciousness, and could machines have it? | Science

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

We no longer know if we’re seeing the same information or what anybody else is seeing

From Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED.com

As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this.

and

What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

and

Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.

Longer than usual (23 min) TED talk, but worth it.

I, too, believe that there’s no malicious intent behind the increasingly capable AI we see these days. Quite the opposite, I believe that most people working at Google or Facebook are there to make a positive impact, to change the world for the better. The problem, on top of the business model, is that a lot of people, even the most brilliant ones, don’t take the time to ponder the long-term consequences of the things they are building and the way they are building them today.

The minimum dataset scale for deep learning

From Google Brain chief: Deep learning takes at least 100,000 examples | VentureBeat

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

The dangerous rush to build AI expertise

From Lyft’s biggest AI challenge is getting engineers up to speed | VentureBeat

Machine learning and deep learning AI have gone from the niche realm of PhDs to tools that will be used throughout all types of companies. That equates to a big skills gap, says Gil Arditi, product lead for Lyft’s Machine Learning Platform.

and

Today, of course, any engineer with a modicum of experience can spin up databases on user-friendly cloud services. That’s the path that AI processes have to travel, he says. Luckily, machine learning is making AI more accessible to newbies without a PhD in statistics, mathematics, or computer science.

“Part of the promise of machine learning in general but deep learning in particular … is that there actually is not a lot of statistical modeling,” said Arditi. “Instead of giving to the machines exact formulas that will address the problem, you just give it the tools and treat it like a black box.”

From LinkedIn plans to teach all its engineers the basics of using AI | VentureBeat

The academy isn’t designed to give engineers an academic grounding in machine learning as a discipline. It’s designed instead to prepare them for using AI in much the same way that they’d use a system like QuickSort, an algorithm for sorting data that’s fed into it. Users don’t have to understand how the underlying system works, they just need to know the right way to implement it.

That’s the goal for LinkedIn, Agarwal said. Thus far, six engineers have made it through the AI academy and are deploying machine learning models in production as a result of what they learned. The educational program still has a ways to go (Agarwal said he’d grade it about a “C+” at the moment) but it has the potential to drastically affect LinkedIn’s business.

From Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent – The New York Times

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

and

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

Two thoughts:

  • This is unprecedented in the last two decades. Not even the rise of virtualization or cloud computing triggered such a massive call to action.
  • Do you really think that all these education programs and all these rushed experts will spend any significant time on the ethical aspects of AI and long-term implications of algorithmic bias?

NATO calls for a specialist meeting about artificial intelligence in mid-2018

From NATO urged to rapidly absorb AI into its command and control | Jane’s 360

NATO advisers and industry are urging the allies to rapidly absorb artificial intelligence software into their militaries’ observe, orient, decide, act (OODA) loop or risk seeing the latter collapse in the face of adversaries’ increasingly sophisticated artificial intelligence (AI)-enabled attacks.

The NATO Information Systems Technology (IST) Panel Office has already arranged a 150-person meeting in Bordeaux for the end of May 2018:

In order to avoid an abstract scientific discussion, the national STB representatives will engage operational experts to participate and work with the scientists towards a common road map for future research activities in NATO that meet operational needs.

Within the OODA loop the first step ‘Observe’ is about harvesting data. Intelligent integration of heterogeneous devices, architectures of acquisition systems and sensors, decentralized management of data, and autonomous collection platforms and sensors give a huge field for improvement with Natural Language Processing and Artificial Intelligence technologies for acquiring and processing Big Data. The next step ‘Orient’ is about reasoning. Analysis of social media, information fusion, anomaly detection, and behavior modeling are domains with huge potential for Machine Learning algorithms. The same is applicable for the ‘Decide’ step where predictive analytics, augmented and virtual reality and many more technologies support the operational decision-making process. A complex battlefield and high speed operations require independently acting devices to ‘Act’ with a certain degree of Autonomy. In all steps, the application of AI technologies for automated analysis, early warnings, guaranteeing trust in the Internet of Things (IoT), and distinguishing relevant from Fake Data is mandatory.

This is the escalation that Nick Bostrom first (in his book Superintelligence) and Elon Musk later were talking about.

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

From [1705.08421] AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips with actions localized in space and time, resulting in 210k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: the definition of atomic visual actions, which avoids collecting data for each and every complex action; precise spatio-temporal annotations with possibly multiple annotations for each human; the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court.
We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.
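
To make “spatio-temporally localized” concrete, an annotation in this style boils down to roughly the following record per person per keyframe. This is a hypothetical illustration of the schema, not the dataset’s actual file format.

```python
# Hypothetical AVA-style annotation: one person box at one timestamp,
# carrying possibly several atomic action labels at once.
annotation = {
    "video_id": "movie_0001",
    "timestamp_s": 902.0,                       # keyframe inside a short clip
    "person_box": (0.21, 0.10, 0.58, 0.93),     # normalized (x1, y1, x2, y2)
    "actions": ["stand", "talk to", "watch"],   # multiple labels per human
    "person_id": 3,                             # identity tracked over time
}
```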

Google confirms it’s using YouTube to teach AI about human actions

From Google built a dataset to teach its artificial intelligence how humans hug, cook, and fight — Quartz

Google, which owns YouTube, announced on Oct. 19 a new dataset of film clips, designed to teach machines how humans move in the world. Called AVA, or “atomic visual actions,” the videos aren’t anything special to human eyes—they’re three second clips of people drinking water and cooking curated from YouTube. But each clip is bundled with a file that outlines the person that a machine learning algorithm should watch, as well as a description of their pose, and whether they’re interacting with another human or object. It’s the digital version of pointing at a dog with a child and coaching them by saying, “dog.”

and

This technology could help Google to analyze the years of video it processes on YouTube every day. It could be applied to better target advertising based on whether you’re watching a video of people talk or fight, or in content moderation. The eventual goal is to teach computers social visual intelligence, the authors write in an accompanying research paper, which means “understanding what humans are doing, what might they do next, and what they are trying to achieve.”

Google’s video dataset is free.

In 2015, I speculated on Twitter:

I wonder if @google already has enough @youtube videos to create a video version of Wikipedia (and if they already are machine learning it)

We want the UAE to become the world’s most prepared country for artificial intelligence 

From Mohammad Bin Rashid reveals reshuffled UAE Cabinet | Gulfnews.com

The new government – the 13th in the UAE’s history – sees the appointment of Omar Bin Sultan Al Olama, 27, as the State Minister for Artificial Intelligence.

“We want the UAE to become the world’s most prepared country for artificial intelligence,” Shaikh Mohammad said.

Shaikh Mohammad added the new phase focuses on “future skills, future sciences and future technology, as we prepare for the centenary to ensure a better future for our generations”.

After Russia and China, the United Arab Emirates wants to make clear, too, that AI is a strategic advantage and a top priority.

Mastering the game of Go without human knowledge

From Mastering the game of Go without human knowledge : Nature

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Accumulating thousands of years of human knowledge during a period of just a few days

From AlphaGo Zero: Learning from scratch | DeepMind

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

and

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.

The new AlphaGo Zero is impressive not just because it uses no data set to become the world leader at what it does, but also because it achieves that goal at a pace no human will ever be able to match.
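
The training loop DeepMind describes can be sketched in a few lines. This is only the shape of the algorithm, not DeepMind’s implementation; `new_game`, `run_search`, `sample_move`, and `update_network` are hypothetical placeholders for the Go environment, the tree search, and the gradient step.

```python
def train_zero_style(network, iterations, games_per_iteration):
    """Sketch of the AlphaGo Zero loop: self-play guided by search, then
    supervised updates toward the search's move choices and the outcome."""
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            game, history = new_game(), []
            while not game.is_over():
                # Tree search guided by the current network's move
                # probabilities and value estimates (stronger than the raw
                # network, so it acts as the network's "teacher").
                pi = run_search(game, network)
                history.append((game.state(), pi))
                game.play(sample_move(pi))
            z = game.outcome()  # final result from the first player's perspective
            # Each training target is the outcome seen from the player to move
            # in that position (players alternate).
            examples += [(s, pi, z if i % 2 == 0 else -z)
                         for i, (s, pi) in enumerate(history)]
        # Train the network to predict the search policy and the final winner;
        # the improved network then guides the next round of self-play.
        network = update_network(network, examples)
    return network
```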

OpenAI achieves Continuous Agent Adaptation via Meta-Learning

From Adaptation via Meta-Learning

We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.

Summary: After a few epochs of evolution, Spiders, being the weakest, disappeared, the subpopulation of Bugs more than doubled, the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies end up dominating the population.
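
The selection dynamics in the quote are a plain evolutionary loop. Here is a toy sketch of it (my own, with a hypothetical `fight` function standing in for the multi-round RoboSumo games):

```python
import random

def evolve(population, epochs, matches_per_epoch, fight):
    """Toy version of the dynamics above: each match's loser is replaced by a
    copy of the winner, so better adaptation strategies spread through the
    population while its size stays constant."""
    for _ in range(epochs):
        for _ in range(matches_per_epoch):
            i, j = random.sample(range(len(population)), 2)
            win, lose = (i, j) if fight(population[i], population[j]) else (j, i)
            population[lose] = dict(population[win])   # winner replicates
    return population

# Example: agents are dicts tagged by adaptation strategy; a dummy fight
# function makes meta-learners win (a wins iff a is meta, or neither is meta),
# mimicking the reported outcome.
pop = [{"strategy": s} for s in ("meta", "tracking", "rl2") for _ in range(10)]
pop = evolve(pop, epochs=10, matches_per_epoch=20,
             fight=lambda a, b: a["strategy"] == "meta" or b["strategy"] != "meta")
```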

OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least in certain conditions. The environment is dynamic for a number of reasons, including the fact that opponents are learning as well.

AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.

The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.

We know that the neocortex is a prediction machine and that human intelligence amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

From [1710.03641] Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
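
For intuition, here is a toy first-order sketch of gradient-based meta-learning (MAML-flavored; a generic illustration, not the paper’s exact algorithm). The meta-learner looks for an initialization that adapts in a single gradient step to tasks whose parameters keep changing, which is the “nonstationary” ingredient.

```python
import numpy as np

def grad(w, x, y):
    """d/dw of mean squared error for the 1-D linear model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

def sample_task(rng):
    """A task is a 1-D regression problem whose slope changes over time."""
    slope = rng.uniform(-2.0, 2.0)
    def batch(n):
        x = rng.standard_normal(n)
        return x, slope * x
    return batch

rng = np.random.default_rng(0)
w, inner_lr, outer_lr = 0.0, 0.1, 0.01
for _ in range(2000):
    batch = sample_task(rng)
    x_s, y_s = batch(20)                      # support set: adapt to the task
    w_fast = w - inner_lr * grad(w, x_s, y_s)
    x_q, y_q = batch(20)                      # query set: evaluate adapted params
    w -= outer_lr * grad(w_fast, x_q, y_q)    # first-order meta-update
```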

“Be careful; things can be worse than they appear”: Understanding Biased Algorithms and Users’ Behavior around Them in Rating Platforms

From http://social.cs.uiuc.edu/papers/ICWSM17-PrePrint.pdf

Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases?
We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms and found that one site’s algorithmic rating system biased ratings, particularly low-to-medium quality hotels, significantly higher than others (up to 37%).

Analyzing reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used a review on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This raising of awareness included practices like efforts to reverse engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust.

We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.
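
The audit technique itself is conceptually simple: collect ratings for the same hotels on several platforms, then look for a systematic per-platform offset. A hypothetical sketch with made-up numbers (not the authors’ code or data):

```python
import pandas as pd

# Hypothetical ratings for the same hotels on three platforms, normalized 0-100.
df = pd.DataFrame({
    "hotel":    ["A", "A", "A", "B", "B", "B"],
    "platform": ["site1", "site2", "site3"] * 2,
    "rating":   [78, 62, 60, 90, 88, 87],
})

# For each hotel, compare a platform's rating to the mean of the other
# platforms; a consistently positive offset on one platform suggests its
# rating algorithm inflates scores.
total = df.groupby("hotel")["rating"].transform("sum")
count = df.groupby("hotel")["rating"].transform("count")
df["offset_vs_others"] = df["rating"] - (total - df["rating"]) / (count - 1)
print(df.groupby("platform")["offset_vs_others"].mean())
```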

The hippocampus as a predictive map

From The hippocampus as a predictive map : Nature

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.

DeepMind thinks that the hippocampus summarizes future events using a “predictive map”

From The hippocampus as a ‘predictive map’ | DeepMind

Our insights were derived from reinforcement learning, the subdiscipline of AI research that focuses on systems that learn by trial and error. The key computational idea we drew on is that to estimate future reward, an agent must first estimate how much immediate reward it expects to receive in each state, and then weight this expected reward by how often it expects to visit that state in the future. By summing up this weighted reward across all possible states, the agent obtains an estimate of future reward.

Similarly, we argue that the hippocampus represents every situation – or state – in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events, known formally as the “successor representation”. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.
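
The “successor representation” has a compact numerical form: a matrix M whose entry M[s, s′] is the expected discounted number of future visits to state s′ starting from s. Values then factor as V = M · R, so when rewards change only the reweighting changes, not the map. A toy sketch with a made-up three-state chain mirroring the commute example:

```python
import numpy as np

gamma = 0.95
# Hypothetical transition matrix under a fixed policy:
# leaving work -> commute -> home (absorbing).
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
# Successor representation: M = I + gamma*T + gamma^2*T^2 + ... = (I - gamma*T)^-1
M = np.linalg.inv(np.eye(3) - gamma * T)

R = np.array([0.0, 0.0, 1.0])   # reward only at "home"
V = M @ R                       # expected future reward from each state
print(V)                        # changing R updates V instantly; M is reused
```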

I wonder what Jeff Hawkins thinks about this new theory.

Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World

From Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World | bioRxiv

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remain a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that using Hebbian-like learning rules small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.

Not on artificial intelligence per se, but Jeff Hawkins was the first to suggest a unifying theory of how the brain works in 2005 with his book On Intelligence. It’s interesting to see how the theory has been refined in the last 12 years and how it might influence today’s development of AI algorithms.

Chinese state plan to dominate AI by 2030

From China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems

The plan prescribes a high level of government investment in theoretical and applied AI breakthroughs (see Part III below for more), while also acknowledging that, in China as around the world, private companies are currently leading the charge on commercial applications of AI.

The plan acknowledges, meanwhile, that China remains far behind world leaders in development of key hardware enablers of AI, such as microchips suited for machine learning use (e.g., GPUs or re-configurable processors). The plan’s ambition is underlined by its recognition of the hard road ahead.

and

China is embarking upon an agenda of “intelligentization” (智能化), seeking to take advantage of the transformative potential of AI throughout society, the economy, government, and the military. Through this new plan, China intends to pursue “indigenous innovation” in the “strategic frontier” technology of AI in furtherance of a national strategy for innovation-driven development.

the Chinese government is encouraging its own AI enterprises to pursue an approach of “going out,” including through overseas mergers and acquisitions, equity investments, and venture capital, as well as the establishment of research and development centers abroad.

China plans to develop resources and ecosystems conducive to the goal of becoming a “premier innovation center” in AI science and technology by 2030. In support of this goal, the plan calls for an “open source and open” approach that takes advantage of synergies among industry, academia, research, and applications, including through creating AI “innovation clusters.”

and

the Chinese leadership wants to ensure that advances in AI can be leveraged for national defense, through a national strategy for military-civil fusion (军民融合). According to the plan, resources and advances will be shared and transferred between civilian and military contexts. This will involve the establishment and normalizing of mechanisms for communication and coordination among scientific research institutes, universities, enterprises, and military industry.

Full translation of China’s State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan – Both the original document and the commentary on NewAmerica are critical reads.

Startup generates and sells synthetic data for AI training

From Home – Neuromation

We propose a solution whose accuracy is guaranteed by construction: synthesizing large datasets along with perfectly accurate labels. The benefits of synthetic data are manifold. It is fast to synthesize and render, perfectly accurate, tailored for the task at hand, and can be modified to improve the model and training itself. It is important to note that real data with accurate labels is still required for evaluating models trained on synthetic data, in order to guarantee acceptable performance at inference time. However, the amount of validation data required is orders of magnitude smaller than training data!

They generate and sell synthetic datasets for AI training. All data is charged per item, and comes pre-labelled.

All transactions are done using an extended ERC-20 compliant Ethereum token. People can mine tokens by performing the computationally intensive tasks of data generation and model training instead of mining cryptocurrency.
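
The core idea, labels that are correct by construction because the generator placed the objects itself, can be shown in a few lines. A toy example (my own illustration, not Neuromation’s pipeline):

```python
import numpy as np

def synth_example(size=64, rng=np.random.default_rng()):
    """Render a white rectangle on a black image; the bounding-box label is
    exact because we chose where to draw it."""
    img = np.zeros((size, size), dtype=np.uint8)
    w, h = rng.integers(8, 24, size=2)
    x, y = rng.integers(0, size - w), rng.integers(0, size - h)
    img[y:y + h, x:x + w] = 255
    return img, (int(x), int(y), int(w), int(h))   # perfectly accurate label

dataset = [synth_example() for _ in range(10_000)]  # fast, cheap, pre-labelled
```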

Nick Bostrom joins newly formed Ethics & Society research group at DeepMind

From DeepMind launches new research team to investigate AI ethics – The Verge

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

Great effort. I’d love to attend a conference arranged by groups like this one.

Information Bottleneck Theory might explain how deep (and human) learning works

From New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.

and

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

but

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example.

The video is here.
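
For reference, the “bottleneck” Tishby refers to is an explicit trade-off: a representation T of the input X should discard as much about X as possible while keeping what is informative about the label Y. In the original 1999 formulation the objective is:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where I(·;·) is mutual information and β sets how much predictive information must be retained for each bit of input that is “forgotten”.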

Top academic and industry minds in a panel about the future of AI

From Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds – YouTube

From left to right: Elon Musk (Tesla, SpaceX), Stuart Russell (UC Berkeley), Bart Selman (Cornell University), Ray Kurzweil (Google, inventor, futurist), David Chalmers (New York University, Australian National University, philosopher), Nick Bostrom (University of Oxford, philosopher), Demis Hassabis (DeepMind), Sam Harris (author, philosopher, neuroscientist, atheist), and Jaan Tallinn (Skype, Kazaa) discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

Max Tegmark put in a room some of the brightest minds of our times to discuss Artificial General Intelligence and Superintelligence. This is the video of the most significant panel at that event, the Beneficial AI 2017 conference.

It’s a one-hour video, totally worth your time.

On Cognitive Computing vs Artificial Intelligence

From Ginni Rometty on Artificial Intelligence – Bloomberg

Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

and

When I went to Davos in January, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility if we build this stuff to guide it safely into the world. First, be clear on the purpose, work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who are the experts, where did the data come from. And when consumers are using AI, you inform them that they are and inform the company as well that owns the intellectual property. And the third thing is to be committed to skill.

IBM and its term “cognitive computing” are all about so-called “weak AI”. The problem is that explaining the reasoning behind an answer is incredibly challenging at the moment, compared to just giving the answer in a black-box fashion.

The electromagnetic spectrum is now the new high ground on the battlefield

From Artificial Intelligence Could Help Neutralize Enemy Bombs

Capt. Scott Kraft, commanding officer at the Naval Surface Warfare Center Indian Head technology division in Maryland, said artificial intelligence and big data analytics could potentially help technicians more quickly recognize exactly what type of bomb they are dealing with and choose the best option for neutralizing it. The vast amount of data collected during the past 16 years of war could be exploited to make faster decisions in combat situations, he said.

and

AI could also help EOD forces defeat electronic warfare threats by detecting sources of transmission and interference, officials said.

“The electromagnetic spectrum is now the new high ground on the battlefield,” Young said. U.S. troops “have to have situational awareness of it, what’s happening and why, and if we don’t we’re going to be at a disadvantage.”

Signals interference can impede the operations of robots and other EOD tools.

“If you’ve been to theater lately … you’ve heard about a lot of the counter-UAS systems along with all the jammers, along with all the electronic warfare systems,” Young said.

“It becomes very complex. So we want to try to simplify that” for operators that aren’t EW experts, Young said.

The whole article is about artificial intelligence and drone technologies applied to explosive ordnance disposal. However, reading it, it’s easy to see how AI is considered a strategic weapon and could be used for many applications, not just improvised explosive device (IED) discovery and disposal. And some military organizations have very large data sets to train AI.

The possible applications go all the way to supersoldier scenarios, as I heard from at least one startup.

No surprise Putin said that whoever leads in AI will rule the world.

Real-time people and object recognition for check-out at a retail shop

From Autonomous Checkout, Real Time System v0.21 – YouTube

This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.

A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition at a retail check-out.

Automated retail checkout is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in its first retail shop.

Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.

Once smart contact lenses become a reality, this will be inevitable.

AI can correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women

From Deep neural networks are more accurate than humans at detecting sexual orientation from facial images | PsyArXiv Preprints

We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation.

Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).

Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy.

Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.

Let me reiterate: The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.

Imagine if this analysis were incorporated into the hiring process and used to discriminate against candidates.

I think that the algorithms can be biased, harmful, and even deadly

From Pioneering computer scientist calls for National Algorithm Safety Board | Techworld

Renowned computer scientist Ben Shneiderman has a plan on how to ensure algorithmic accountability. The University of Maryland professor and founder of its Human-Computer Interaction Lab outlined his strategy at the 2017 Turing Lecture on Tuesday. “What I’m proposing is a National Algorithm Safety Board,” Shneiderman told the audience in London’s British Library. The board would provide three forms of independent oversight: planning, continuous monitoring, and retrospective analysis. Combined, they provide a basis to ensure the correct system is selected then supervised, and lessons can be learnt to make better algorithms in future.

The story of Ferguson wasn’t algorithm-friendly. It’s not “likable.”

From Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

Another exceptional TED Talk.

Modern AIs do not read and do not understand. They only act as if they do.

From Noriko Arai: Can a robot pass a university entrance exam? | TED Talk

Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo — without actually understanding a thing. While it’s not matriculating anytime soon, Todai Robot’s success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?

The key idea of this beautiful talk:

we humans can understand the meaning. That is something which is very, very lacking in AI. But most of the students just pack the knowledge without understanding the meaning of the knowledge, so that is not knowledge, that is just memorizing, and AI can do the same thing. So we have to think about a new type of education.

Whoever becomes the leader in AI will become the ruler of the world

From ‘Whoever leads in AI will rule the world’: Putin to Russian children on Knowledge Day — RT News

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” Russian President Vladimir Putin said.

However, the president said he would not like to see anyone “monopolize” the field.

“If we become leaders in this area, we will share this know-how with entire world, the same way we share our nuclear technologies today,” he told students from across Russia via satellite link-up, speaking from the Yaroslavl region.

Elon Musk replies to this specific article on Twitter:

It begins ..

and

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.

Just as a small team of five people plus AI could upend a market, a small, weak government plus AI could upend the geopolitical scene. Human augmentation is a key milestone on that path. I have already heard of multiple companies mentioned here on H+ collaborating with military and government agencies.

Machine-learning software didn’t just mirror those biases, it amplified them

From Machines Learn a Biased View of Women | WIRED

…Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men.

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.

Bias in artificial general intelligence may lead to catastrophic outcomes, but even bias in "weak AI", designed merely to assist and extend human intelligence, poses a significant risk.

The perception of augmented humans might be more distorted than ever.
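To make the idea of "amplification" concrete, here is a toy Python sketch of the kind of measurement involved: compare how often an activity is labeled with a woman in the training data versus in the model's predictions. The numbers and the helper function are made up for illustration; this is not the researchers' actual code or data.

```python
from collections import Counter

def woman_fraction(pairs):
    """For each activity, the fraction of examples labeled with a woman."""
    totals, woman = Counter(), Counter()
    for activity, gender in pairs:
        totals[activity] += 1
        if gender == "woman":
            woman[activity] += 1
    return {activity: woman[activity] / totals[activity] for activity in totals}

# Hypothetical (activity, gender) labels: training set vs. the trained model's output.
train_pairs = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34
pred_pairs  = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

print(woman_fraction(train_pairs)["cooking"])  # 0.66 in the data...
print(woman_fraction(pred_pairs)["cooking"])   # ...0.84 in the predictions: amplified
```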

Lethal autonomous weapons threaten to become the third revolution in warfare

From Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons – Future of Life Institute

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

and

Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

From PathNet: Evolution Channels Gradient Descent in Super Neural Networks

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting.
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.

Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function.

We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A.
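For a sense of the mechanics, here is a minimal Python sketch of the tournament-selection loop over pathways described above. It is not the authors' implementation: the path representation, the mutation rule, and fitness_fn (which would train and evaluate the network along a given path) are simplified stand-ins.

```python
import copy
import random

def random_path(num_layers, modules_per_layer, active_per_layer):
    """A path selects a small subset of modules to use in each layer."""
    return [random.sample(range(modules_per_layer), active_per_layer)
            for _ in range(num_layers)]

def mutate(path, modules_per_layer, rate=0.1):
    """Simplified mutation: occasionally swap a module for a random one."""
    new_path = copy.deepcopy(path)
    for layer in new_path:
        for i in range(len(layer)):
            if random.random() < rate:
                layer[i] = random.randrange(modules_per_layer)
    return new_path

def tournament_step(population, fitness_fn, modules_per_layer):
    """Pick two paths, evaluate both, overwrite the loser with a mutated copy of the winner."""
    a, b = random.sample(range(len(population)), 2)
    winner, loser = (a, b) if fitness_fn(population[a]) >= fitness_fn(population[b]) else (b, a)
    population[loser] = mutate(population[winner], modules_per_layer)
```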

Explosion AI releases a free annotation tool for data scientists

From Prodigy: A new tool for radically efficient machine teaching | Explosion AI

Prodigy addresses the big remaining problem: annotation and training. The typical approach to annotation forces projects into an uncomfortable waterfall process. The experiments can’t begin until the first batch of annotations are complete, but the annotation team can’t start until they receive the annotation manuals. To produce the annotation manuals, you need to know what statistical models will be required for the features you’re trying to build. Machine learning is an inherently uncertain technology, but the waterfall annotation process relies on accurate upfront planning. The net result is a lot of wasted effort.

Prodigy solves this problem by letting data scientists conduct their own annotations, for rapid prototyping. Ideas can be tested faster than the first planning meeting could even be scheduled. We also expect Prodigy to reduce costs for larger projects, but it’s the increased agility we’re most excited about. Data science projects are said to have uneven returns, like start-ups: a minority of projects are very successful, recouping costs for a larger number of failures. If so, the most important problem is to find more winners. Prodigy helps you do that, because you get to try things much faster.
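The workflow change is easier to see in code. Below is a hypothetical, minimal "annotate while you train" loop, not Prodigy's actual API: model, unlabelled, and ask_human are assumed stand-ins, and the point is simply that annotation and training happen in the same tight loop rather than in separate waterfall phases.

```python
def annotate_and_train(model, unlabelled, ask_human, batch_size=10):
    """Hypothetical sketch: interleave annotation and training instead of a waterfall."""
    labelled = []
    while unlabelled:
        # Ask the human about the examples the current model is least sure of.
        batch = sorted(unlabelled, key=model.uncertainty, reverse=True)[:batch_size]
        for example in batch:
            labelled.append((example, ask_human(example)))  # e.g. a binary accept/reject
            unlabelled.remove(example)
        model.update(labelled)  # refit on everything annotated so far
    return model
```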

How AI could learn about human behavior from YouTube

From Joseph Redmon: How computers learn to recognize objects instantly | TED.com

Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.

A few years ago, on my personal Twitter account, I suggested that a side benefit for Google of owning YouTube would be having the largest archive of human activities on video to train its AI. What Redmon did here is what I had in mind at that time.

By the way, the demonstration during the TED talk is impressive.
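If you want to try something similar yourself, pretrained YOLO models trained with Darknet can be run through OpenCV's DNN module. A rough sketch follows; the cfg/weights file names are assumptions (any YOLOv3-style pair from the Darknet project should behave the same way), and the confidence threshold is arbitrary.

```python
import cv2
import numpy as np

# Hedged sketch: load a Darknet-trained YOLO model via OpenCV. File names are assumptions.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

img = cv2.imread("street.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())  # one array per YOLO output layer

h, w = img.shape[:2]
for out in outputs:
    for det in out:               # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:      # arbitrary threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print(class_id, confidence, (cx - bw / 2, cy - bh / 2, bw, bh))
```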

Ray Kurzweil on augmenting the human brain through AI, nanorobotics and cloud computing

From Ray Kurzweil: Get ready for hybrid thinking | TED.com

Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

Speaking of AI augmenting human intelligence rather than replacing it, Ray Kurzweil popularized the idea in 2014, suggesting that nanorobotics could do the trick in just a few decades.

Remember that he works for Google.

Apple sees AI as an augmentation of human intelligence, not a replacement

From Tom Gruber: How AI can enhance our memory, work and social lives | TED.com

Tom Gruber, co-creator of Siri, wants to make “humanistic AI” that augments and collaborates with us instead of competing with (or replacing) us. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function — from turbocharging our design skills to helping us remember everything we’ve ever read and the name of everyone we’ve ever met.

The video is short but gives a very clear idea of how Apple is thinking about AI and what the future applications could be.

We try to engineer AI without understanding intelligence or cognition first

From What an artificial intelligence researcher fears about AI

…as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

and

We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

and

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of.

Wonderful blog post. Artificial intelligence experts face scientific, legal, moral, and ethical dilemmas like no experts before them in our history.

Facebook AI Evolves Its Language from Plain English To Something New

From AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

and

Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

What if artificial intelligence helped humans develop a more efficient, universal language?

AI and machine learning algorithms helped predict instances of schizophrenia with 74% accuracy

From IBM News room – 2017-07-21 IBM and University of Alberta Publish New Data on Machine Learning Algorithms to Help Predict Schizophrenia

In the paper, researchers analyzed de-identified brain functional Magnetic Resonance Imaging (fMRI) data from the open data set, Function Biomedical Informatics Research Network (fBIRN) for patients with schizophrenia and schizoaffective disorders, as well as a healthy control group. fMRI measures brain activity through blood flow changes in particular areas of the brain.

Specifically, the fBIRN data set reflects research done on brain networks at different levels of resolution, from data gathered while study participants conducted a common auditory test. Examining scans from 95 participants, researchers used machine learning techniques to develop a model of schizophrenia that identifies the connections in the brain most associated with the illness.
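The article does not include the model details, but the general shape of such an analysis is simple to sketch with scikit-learn: one row of connectivity-derived features per participant, a diagnosis label, and cross-validated classification. The data below is random placeholder noise, so the printed accuracy means nothing; the sketch only shows the pipeline, not the IBM/University of Alberta method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(95, 300))    # placeholder: connectivity features per participant
y = rng.integers(0, 2, size=95)   # placeholder: 0 = control, 1 = patient

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```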

Vicarious gets another $50M to attempt building artificial general intelligence

From Khosla Ventures leads $50 million investment in Vicarious’ AI tech | VentureBeat | Entrepreneur | by Bérénice Magistretti

The Union City, California-based startup is using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks. Vicarious focuses on the neocortex, a part of the brain concerned with sight and hearing.

“We aren’t trying to emulate the brain exactly,” wrote Vicarious cofounder and CEO Scott Phoenix, in an email to VentureBeat. “A good way to think about it is airplanes and birds. When building a plane, you want to borrow relevant features from birds, like low body weight and deformable wings, without getting into irrelevant details like feather colors and flapping.”

I think this quote is deeply inspired by the book Superintelligence by Nick Bostrom, which is not surprising, as Vicarious is trying to build the holy grail of AI: an artificial general intelligence.

They have the most impressive list of investors I have seen in a long time.

And yet, AI is easier to trick than people think

From Robust Adversarial Examples

We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.

This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it’s a monitor or desktop computer regardless of how it’s zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts.

Watch the videos.
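OpenAI's robust examples are built by optimizing the perturbation to survive many random scalings and rotations, as the post describes. The basic, single-view version of the idea is the classic fast gradient sign method, sketched below in PyTorch; model, x, and y are an assumed pretrained classifier, an image tensor in [0, 1], and its integer label.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.01):
    """Basic single-view FGSM attack, not the robust multi-view construction above."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss, stay in valid range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```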

I have never seen Elon Musk so concerned about AI

From Elon Musk says we need to regulate AI before it becomes a danger to humanity – The Verge

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association Summer Meeting on Saturday. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

The solution, says Musk, is regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.”

He doesn’t mince words anymore. He must have seen something that truly terrified him.

The full video is here: https://www.youtube.com/watch?v=2C-A797y8dA