Artificial Intelligence

Mastering the game of Go without human knowledge

From Mastering the game of Go without human knowledge : Nature

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Accumulating thousands of years of human knowledge during a period of just a few days

From AlphaGo Zero: Learning from scratch | DeepMind

It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

and

After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo – which had itself defeated 18-time world champion Lee Sedol – by 100 games to 0. After 40 days of self-training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world’s best players and world number one Ke Jie.

AlphaGo Zero is impressive not just because it needs no human data set to become the world leader at what it does, but also because it reaches that level at a pace no human will ever be able to match.
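To make the loop concrete, here is a deliberately tiny, runnable sketch. It shrinks the idea to a trivial counting game: a value table stands in for the deep network, and one-ply lookahead stands in for MCTS. Everything in it is an illustrative stand-in, not AlphaGo Zero’s actual components.

```python
# Toy sketch of the self-play improvement loop (not AlphaGo Zero itself):
# a value table + one-ply lookahead stand in for the deep network + MCTS.
# Game: players alternately add 1-3 to a counter; whoever reaches 21 wins.
import random
from collections import defaultdict

TARGET, MOVES = 21, (1, 2, 3)
value = defaultdict(float)  # value[s] ~ expected outcome for the player to move

def search(state):
    """Stand-in for tree search: move to the state worst for the opponent."""
    legal = [m for m in MOVES if state + m <= TARGET]
    return min(legal, key=lambda m: value[state + m])

def self_play():
    """Play one game against itself; return (state, outcome) training pairs."""
    state, visited = 0, []
    while state < TARGET:
        visited.append(state)
        legal = [m for m in MOVES if state + m <= TARGET]
        state += random.choice(legal) if random.random() < 0.2 else search(state)
    # whoever reached TARGET made the last move and wins; outcomes
    # alternate as we walk backwards through the visited states
    return [(s, 1.0 if i % 2 == 0 else -1.0)
            for i, s in enumerate(reversed(visited))]

for _ in range(5000):                     # iterate: self-play, then "train"
    for state, z in self_play():
        value[state] += 0.1 * (z - value[state])  # nudge prediction toward outcome

print(search(18))  # the trained player plays 3, reaching 21 and winning
```

Each pass through the outer loop is one turn of the crank described above: the current player generates games against itself, the value estimates are updated from the outcomes, and the next round of self-play is produced by a slightly stronger player.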

OpenAI achieves Continuous Agent Adaptation via Meta-Learning

From Adaptation via Meta-Learning

We’ve evolved a population of 1050 agents of different anatomies (Ant, Bug, Spider), policies (MLP, LSTM), and adaptation strategies (PPO-tracking, RL^2, meta-updates) for 10 epochs. Initially, we had an equal number of agents of each type. Every epoch, we randomly matched 1000 pairs of agents and made them compete and adapt in multi-round games against each other. The agents that lost disappeared from the population, while the winners replicated themselves.

Summary: After a few epochs of evolution, the Spiders, being the weakest, disappeared; the subpopulation of Bugs more than doubled; and the Ants stayed the same. Importantly, the agents with meta-learned adaptation strategies end up dominating the population.
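The selection dynamic itself is simple enough to replay in a few lines. In this toy, runnable version the per-species win probabilities are invented; in the actual experiment, fitness came from multi-round RoboSumo adaptation games, not a weighted coin flip.

```python
# Toy replay of the population tournament: random pairing each epoch,
# losers culled, winners duplicated. SKILL values are invented here.
import random
from collections import Counter

SKILL = {"ant": 0.50, "bug": 0.60, "spider": 0.35}  # assumed, for illustration

def fight(a, b):
    """Stub for a multi-round adaptation game: stronger species wins more often."""
    return a if random.random() < SKILL[a] / (SKILL[a] + SKILL[b]) else b

population = ["ant"] * 350 + ["bug"] * 350 + ["spider"] * 350
for epoch in range(10):
    random.shuffle(population)
    pairs = zip(population[::2], population[1::2])
    population = [agent for a, b in pairs for agent in [fight(a, b)] * 2]
    print(epoch, Counter(population))
```

Run it and the weakest species collapses within a few epochs, mirroring the summary above.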

OpenAI has developed a “learning to learn” (or meta-learning) framework that allows an AI agent to continuously adapt to a dynamic environment, at least in certain conditions. The environment is dynamic for a number of reasons, including the fact that opponents are learning as well.

AI agents equipped with the meta-learning framework win more fights against their opponents and eventually dominate the environment. Be sure to watch the last video to see the effect.

The meta-learning framework gives the selected AI agents the capability to predict and anticipate the changes in the environment and adapt faster than the AI agents that only learn from direct experience.

We know that the neocortex is a prediction machine and that human intelligence amounts to the capability to anticipate and adapt. This research is a key step towards artificial general intelligence.

Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

From [1710.03641] Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments

The ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation strategies. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest.
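The algorithm builds on gradient-based meta-learning in the MAML family. Below is a minimal first-order sketch of that inner/outer-loop structure on a toy one-dimensional task; it illustrates the general idea only, not the paper’s actual method.

```python
# First-order sketch of gradient-based meta-learning (MAML-style): learn an
# initialization `theta` from which one inner gradient step adapts well to
# any sampled task. Tasks are toy 1-D losses (theta - target)^2.
import random

alpha, beta, theta = 0.1, 0.01, 5.0  # inner lr, outer lr, meta-parameter

def grad(th, target):                # d/d(th) of the loss (th - target)^2
    return 2.0 * (th - target)

for step in range(20000):
    target = random.uniform(-1.0, 1.0)             # sample a task
    adapted = theta - alpha * grad(theta, target)  # inner loop: fast adaptation
    theta -= beta * grad(adapted, target)          # outer loop: improve the init

print(theta)  # settles near 0.0, the point easiest to adapt from
```

The outer loop never optimizes for any single task; it optimizes for being one gradient step away from all of them, which is what makes adaptation in a nonstationary environment fast.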

“Be careful; things can be worse than they appear”: Understanding Biased Algorithms and Users’ Behavior around Them in Rating Platforms

From http://social.cs.uiuc.edu/papers/ICWSM17-PrePrint.pdf

Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases?
We used a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms and found that one site’s algorithmic rating system biased ratings, particularly for low-to-medium quality hotels, significantly higher than the other sites’ (by up to 37%).

Analyzing reviews of 162 users who independently discovered this bias, we seek to understand if, how, and in what ways users perceive and manage this bias. Users changed the typical ways they used a review on a hotel rating platform to instead discuss the rating system itself and raise other users’ awareness of the rating bias. This raising of awareness included practices like efforts to reverse engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust.

We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.

The hippocampus as a predictive map

From The hippocampus as a predictive map : Nature

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.

DeepMind thinks that the hippocampus summarizes future events using a “predictive map”

From The hippocampus as a ‘predictive map’ | DeepMind

Our insights were derived from reinforcement learning, the subdiscipline of AI research that focuses on systems that learn by trial and error. The key computational idea we drew on is that to estimate future reward, an agent must first estimate how much immediate reward it expects to receive in each state, and then weight this expected reward by how often it expects to visit that state in the future. By summing up this weighted reward across all possible states, the agent obtains an estimate of future reward.

Similarly, we argue that the hippocampus represents every situation – or state – in terms of the future states which it predicts. For example, if you are leaving work (your current state) your hippocampus might represent this by predicting that you will likely soon be on your commute, picking up your kids from school or, more distantly, at home. By representing each current state in terms of its anticipated successor states, the hippocampus conveys a compact summary of future events, known formally as the “successor representation”. We suggest that this specific form of predictive map allows the brain to adapt rapidly in environments with changing rewards, but without having to run expensive simulations of the future.
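In matrix form the idea is compact: value equals expected immediate reward per state, weighted by the discounted number of future visits to that state. A hypothetical three-state version of the commute example (states and numbers invented for illustration):

```python
# Sketch of the successor representation: for a fixed policy with transition
# matrix P, the expected discounted occupancy matrix is M = (I - gamma*P)^-1
# and state values follow as V = M @ r, with no simulation of the future.
import numpy as np

gamma = 0.9
P = np.array([[0.0, 1.0, 0.0],   # leaving work -> commute
              [0.0, 0.0, 1.0],   # commute -> home
              [0.0, 0.0, 1.0]])  # home absorbs
r = np.array([0.0, 0.0, 1.0])    # reward only once home

M = np.linalg.inv(np.eye(3) - gamma * P)  # successor representation
V = M @ r
print(np.round(M, 2))  # row i: discounted expected future visits from state i
print(np.round(V, 2))
```

Changing the reward vector r updates V instantly, without relearning M; that is exactly the rapid adaptation to changing rewards described above.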

I wonder what Jeff Hawkins thinks about this new theory.

Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World

From Why Does the Neocortex Have Columns, A Theory of Learning the Structure of the World | bioRxiv

Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections suggesting interactions between columns. Similar patterns of connectivity exist in all regions but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that using Hebbian-like learning rules small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.
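A loose way to picture the core mechanism (my simplification, not the authors’ actual network): each column accumulates (location, feature) sensations for the object it is touching, and lateral connections let columns intersect their hypotheses.

```python
# Illustrative sketch only: objects as sets of (location, feature) pairs.
# A column's hypothesis set shrinks as it integrates sensations over time;
# lateral "votes" between columns intersect hypotheses, which is why the
# multi-column networks in the paper converge with fewer movements.
objects = {
    "mug":  {(0, "rim"), (1, "handle"), (2, "flat base")},
    "bowl": {(0, "rim"), (1, "curved side"), (2, "flat base")},
}

def hypotheses(sensations):
    """Objects consistent with everything this column has sensed so far."""
    return {name for name, pairs in objects.items()
            if all(s in pairs for s in sensations)}

column_a = hypotheses([(0, "rim")])     # one touch: still ambiguous
column_b = hypotheses([(1, "handle")])  # a second column elsewhere on the object
print(column_a, column_b, column_a & column_b)  # lateral vote resolves: {'mug'}
```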

Not on artificial intelligence per se, but Jeff Hawkins was among the first to propose a unifying theory of how the brain works, in his 2005 book On Intelligence. It’s interesting to see how the theory has been refined in the last 12 years and how it might influence today’s development of AI algorithms.

Chinese state plan to dominate AI by 2030

From China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems

The plan prescribes a high level of government investment in theoretical and applied AI breakthroughs (see Part III below for more), while also acknowledging that, in China as around the world, private companies are currently leading the charge on commercial applications of AI.

The plan acknowledges, meanwhile, that China remains far behind world leaders in development of key hardware enablers of AI, such as microchips suited for machine learning use (e.g., GPUs or re-configurable processors). The plan’s ambition is underlined by its recognition of the hard road ahead.

and

China is embarking upon an agenda of “intelligentization” (智能化), seeking to take advantage of the transformative potential of AI throughout society, the economy, government, and the military. Through this new plan, China intends to pursue “indigenous innovation” in the “strategic frontier” technology of AI in furtherance of a national strategy for innovation-driven development.

the Chinese government is encouraging its own AI enterprises to pursue an approach of “going out,” including through overseas mergers and acquisitions, equity investments, and venture capital, as well as the establishment of research and development centers abroad.

China plans to develop resources and ecosystems conducive to the goal of becoming a “premier innovation center” in AI science and technology by 2030. In support of this goal, the plan calls for an “open source and open” approach that takes advantage of synergies among industry, academia, research, and applications, including through creating AI “innovation clusters.”

and

the Chinese leadership wants to ensure that advances in AI can be leveraged for national defense, through a national strategy for military-civil fusion (军民融合). According to the plan, resources and advances will be shared and transferred between civilian and military contexts. This will involve the establishment and normalizing of mechanisms for communication and coordination among scientific research institutes, universities, enterprises, and military industry.

Full translation of China’s State Council Notice on the Issuance of the Next Generation Artificial Intelligence Development Plan – Both the original document and the commentary on NewAmerica are critical reads.

Startup generates and sells synthetic data for AI training

From Home – Neuromation

We propose a solution whose accuracy is guaranteed by construction: synthesizing large datasets along with perfectly accurate labels. The benefits of synthetic data are manifold. It is fast to synthesize and render, perfectly accurate, tailored for the task at hand, and can be modified to improve the model and training itself. It is important to note that real data with accurate labels is still required for evaluating models trained on synthetic data, in order to guarantee acceptable performance at inference time. However, the amount of validation data required is orders of magnitude smaller than training data!
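The “labels for free” point is the crux: because the generator builds the scene, it knows the ground truth by construction. A toy sketch, with a stub standing in for an actual render engine:

```python
# Toy sketch of synthetic data generation: the scene is built by us, so the
# ground-truth label is exact by construction; no human annotator needed.
import random

def render(shape, x, y):
    """Stub standing in for a real render engine."""
    return {"shape": shape, "x": x, "y": y}

def synthesize_example():
    shape = random.choice(["circle", "square"])
    x, y = random.randint(0, 90), random.randint(0, 90)
    image = render(shape, x, y)
    label = {"class": shape, "bbox": (x, y, x + 10, y + 10)}  # perfectly accurate
    return image, label

dataset = [synthesize_example() for _ in range(10_000)]  # fast and pre-labelled
```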

They generate and sell synthetic datasets for AI training. All data is charged per item, and comes pre-labelled.

All transactions are done using an Ethereum-based, ERC-20 compliant token. People can mine tokens by performing the computationally intensive tasks of data generation and model training instead of mining cryptocurrency.

Nick Bostrom joins newly formed Ethics & Society research group at DeepMind

From DeepMind launches new research team to investigate AI ethics – The Verge

Google’s AI subsidiary DeepMind is getting serious about ethics. The UK-based company, which Google bought in 2014, today announced the formation of a new research group dedicated to the thorniest issues in artificial intelligence. These include the problems of managing AI bias; the coming economic impact of automation; and the need to ensure that any intelligent systems we develop share our ethical and moral values.

DeepMind Ethics & Society (or DMES, as the new team has been christened) will publish research on these topics and others starting early 2018. The group has eight full-time staffers at the moment, but DeepMind wants to grow this to around 25 in a year’s time. The team has six unpaid external “fellows” (including Oxford philosopher Nick Bostrom, who literally wrote the book on AI existential risk) and will partner with academic groups conducting similar research, including The AI Now Institute at NYU, and the Leverhulme Centre for the Future of Intelligence.

Great effort. I’d love to attend a conference arranged by groups like this one.

Information Bottleneck Theory might explain how deep (and human) learning works

From New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine

Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts.

and

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

but

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example.

The video is here.
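For reference, the objective Tishby, Pereira and Bialek formalized in 1999: find a compressed representation T of the input X that keeps only what is informative about the target Y, with β setting the trade-off between forgetting and predicting.

```latex
% Information bottleneck objective (Tishby, Pereira & Bialek, 1999):
% compress X into T (minimize I(X;T)) while preserving information
% about Y (keep I(T;Y) high); beta trades the two terms off.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```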

Top academic and industry minds in a panel about the future of AI

From Superintelligence: Science or Fiction? | Elon Musk & Other Great Minds – YouTube

From left to right: Elon Musk (Tesla, SpaceX), Stuart Russell (University of California, Berkeley), Bart Selman (Cornell University), Ray Kurzweil (Google, inventor, futurist), David Chalmers (New York University, Australian National University, philosopher), Nick Bostrom (University of Oxford, philosopher), Demis Hassabis (DeepMind), Sam Harris (author, philosopher, neuroscientist, atheist), and Jaan Tallinn (Skype, Kazaa) discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

Max Tegmark put some of the brightest minds of our time in a room to discuss Artificial General Intelligence and Superintelligence. This is the video of the most significant panel at that event, the Beneficial AI 2017 conference.

It’s a one-hour video, totally worth your time.

On Cognitive Computing vs Artificial Intelligence

From Ginni Rometty on Artificial Intelligence – Bloomberg

Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.” What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

and

When I went to Davos in January, we published something called Transparency and Trust in the Cognitive Era. It’s our responsibility if we build this stuff to guide it safely into the world. First, be clear on the purpose, work with man. We aren’t out here to destroy man. The second is to be transparent about who trained the computers, who are the experts, where did the data come from. And when consumers are using AI, you inform them that they are and inform the company as well that owns the intellectual property. And the third thing is to be committed to skill.

IBM and its term “cognitive computing” are all about so-called “weak AI”. The problem is that explaining the reasoning behind an answer is, at the moment, far more challenging than just giving the answer in a black-box fashion.

The electromagnetic spectrum is now the new high ground on the battlefield

From Artificial Intelligence Could Help Neutralize Enemy Bombs

Capt. Scott Kraft, commanding officer at the Naval Surface Warfare Center Indian Head technology division in Maryland, said artificial intelligence and big data analytics could potentially help technicians more quickly recognize exactly what type of bomb they are dealing with and choose the best option for neutralizing it. The vast amount of data collected during the past 16 years of war could be exploited to make faster decisions in combat situations, he said.

and

AI could also help EOD forces defeat electronic warfare threats by detecting sources of transmission and interference, officials said.

“The electromagnetic spectrum is now the new high ground on the battlefield,” Young said. U.S. troops “have to have situational awareness of it, what’s happening and why, and if we don’t we’re going to be at a disadvantage.”

Signals interference can impede the operations of robots and other EOD tools.

“If you’ve been to theater lately … you’ve heard about a lot of the counter-UAS systems along with all the jammers, along with all the electronic warfare systems,” Young said.

“It becomes very complex. So we want to try to simplify that” for operators that aren’t EW experts, Young said.

The whole article is about artificial intelligence and drone technologies applied to explosive ordnance disposal. However, reading it, it’s easy to see how AI is considered a strategic weapon and could be used for many applications, not just improvised explosive device (IED) discovery and disposal. And some military organizations have very large data sets to train AI.

The possible applications go all the way to the supersoldier scenarios, as I heard from at least one startup.

No surprise Putin said that whoever leads in AI will rule the world.

Real-time people and object recognition for check-out at a retail shop

From Autonomous Checkout, Real Time System v0.21 – YouTube

This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.

A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition at the checkout of a physical store.

Autonomous checkout is just one (very lucrative) application. The technology shown in this video was developed by a company called Standard Cognition, but it’s very likely similar to the one Amazon is testing in its first retail shop.

Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.

Once smart contact lenses become a reality, this will be inevitable.

AI can correctly distinguish gay from heterosexual men in 81% of cases, and gay from heterosexual women in 74% of cases

From Deep neural networks are more accurate than humans at detecting sexual orientation from facial images | PsyArXiv Preprints

We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation.

Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).

Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy.

Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.
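Methodologically, the pipeline the abstract describes is simple: a pretrained deep network as a fixed feature extractor feeding a logistic regression. A shape-only sketch with random stand-in data (the study used features from a deep neural network and cross-validated evaluation):

```python
# Shape-only sketch of the reported pipeline: deep-network embeddings of
# face images fed into a logistic regression. All arrays are random
# stand-ins; no real features or labels are used here.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_images, n_features = 1000, 4096
embeddings = np.random.randn(n_images, n_features)  # stand-in DNN features
labels = np.random.randint(0, 2, n_images)          # stand-in binary labels

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
# the reported accuracies come from cross-validated evaluation and from
# aggregating predictions over up to five images per person
```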

Let me reiterate: The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.

Imagine if this analysis were incorporated into the hiring process and used to discriminate against candidates.

I think that algorithms can be biased, harmful, and even deadly

From Pioneering computer scientist calls for National Algorithm Safety Board | Techworld

Renowned computer scientist Ben Shneiderman has a plan for how to ensure algorithmic accountability. The University of Maryland professor and founder of its Human-Computer Interaction Lab outlined his strategy at the 2017 Turing Lecture on Tuesday. “What I’m proposing is a National Algorithm Safety Board,” Shneiderman told the audience in London’s British Library. The board would provide three forms of independent oversight: planning, continuous monitoring, and retrospective analysis. Combined, they provide a basis to ensure the correct system is selected and then supervised, and that lessons can be learnt to make better algorithms in future.

The story of Ferguson wasn’t algorithm-friendly. It’s not “likable.”

From Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

Another exceptional TED Talk.

Modern AIs do not read and do not understand. They only appear as if they do.

From Noriko Arai: Can a robot pass a university entrance exam? | TED Talk

Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo — without actually understanding a thing. While it’s not matriculating anytime soon, Todai Robot’s success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?

The key idea of this beautiful talk:

we humans can understand the meaning. That is something which is very, very lacking in AI. But most of the students just pack the knowledge without understanding the meaning of the knowledge, so that is not knowledge, that is just memorizing, and AI can do the same thing. So we have to think about a new type of education.

Whoever becomes the leader in AI will become the ruler of the world

From ‘Whoever leads in AI will rule the world’: Putin to Russian children on Knowledge Day — RT News

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” Russian President Vladimir Putin said.

However, the president said he would not like to see anyone “monopolize” the field.

“If we become leaders in this area, we will share this know-how with entire world, the same way we share our nuclear technologies today,” he told students from across Russia via satellite link-up, speaking from the Yaroslavl region.

Elon Musk replies to this specific article on Twitter:

It begins ..

and

China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.

Just as a small team of five plus AI could overturn a market, a small, weak government plus AI could overturn the geopolitical scene. And human augmentation is a key milestone on that path. I have already heard that multiple companies mentioned here on H+ collaborate with military and government agencies.

Machine-learning software didn’t just mirror those biases, it amplified them

From Machines Learn a Biased View of Women | WIRED

…Ordóñez wondering whether he and other researchers were unconsciously injecting biases into their software. So he teamed up with colleagues to test two large collections of labeled photos used to “train” image-recognition software.

Their results are illuminating. Two prominent research-image collections—including one supported by Microsoft and Facebook—display a predictable gender bias in their depiction of activities such as cooking and sports. Images of shopping and washing are linked to women, for example, while coaching and shooting are tied to men.

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
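“Amplified” has a precise meaning here: the skew in the model’s predictions exceeds the skew already present in the training labels. A toy check (all counts invented for illustration):

```python
# Toy check for bias amplification: compare an activity's gender skew in
# the training labels with the skew in the model's predictions.
def skew(counts):
    return counts["woman"] / (counts["woman"] + counts["man"])

train_labels = {"cooking": {"woman": 66, "man": 34}}  # dataset skew: 0.66
predictions  = {"cooking": {"woman": 84, "man": 16}}  # model skew:   0.84

for activity in train_labels:
    before, after = skew(train_labels[activity]), skew(predictions[activity])
    print(activity, before, after, "amplified" if after > before else "ok")
```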

Bias in artificial general intelligence may lead to catastrophic outcomes, but even the bias in “weak AI”, designed to just assist and expand human intelligence, poses a significant risk.

Perception of augmented humans might be more distorted than ever.

Lethal autonomous weapons threaten to become the third revolution in warfare

From Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons – Future of Life Institute

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

and

Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

From PathNet: Evolution Channels Gradient Descent in Super Neural Networks

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting.
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.

Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function.

We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A.
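The evolutionary layer is easy to sketch (my simplification, not DeepMind’s code): a genome is a pathway, i.e. which modules are active in each layer, and a binary tournament overwrites losers with mutated copies of winners.

```python
# Rough sketch of PathNet's selection loop (illustrative only). A "pathway"
# picks K of N modules per layer; its fitness would be task performance
# after briefly training just the modules along that pathway.
import random

N_LAYERS, N_MODULES, K = 3, 10, 3

def random_path():
    return [random.sample(range(N_MODULES), K) for _ in range(N_LAYERS)]

def fitness(path):
    """Stub for 'train the sub-network along `path`, return its reward'."""
    return -sum(sum(layer) for layer in path) + random.gauss(0.0, 1.0)

population = [random_path() for _ in range(64)]
for _ in range(500):  # binary tournament selection
    a, b = random.sample(range(len(population)), 2)
    win, lose = (a, b) if fitness(population[a]) >= fitness(population[b]) else (b, a)
    population[lose] = [[m if random.random() > 0.1 else random.randrange(N_MODULES)
                         for m in layer]               # loser replaced by a
                        for layer in population[win]]  # mutated copy of the winner
```

After task A converges, the winning pathway’s parameters are frozen and a fresh population of pathways is evolved for task B, which is the transfer result the abstract reports.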

Explosion AI releases a free annotation tool for data scientists

From Prodigy: A new tool for radically efficient machine teaching | Explosion AI

Prodigy addresses the big remaining problem: annotation and training. The typical approach to annotation forces projects into an uncomfortable waterfall process. The experiments can’t begin until the first batch of annotations are complete, but the annotation team can’t start until they receive the annotation manuals. To produce the annotation manuals, you need to know what statistical models will be required for the features you’re trying to build. Machine learning is an inherently uncertain technology, but the waterfall annotation process relies on accurate upfront planning. The net result is a lot of wasted effort.

Prodigy solves this problem by letting data scientists conduct their own annotations, for rapid prototyping. Ideas can be tested faster than the first planning meeting could even be scheduled. We also expect Prodigy to reduce costs for larger projects, but it’s the increased agility we’re most excited about. Data science projects are said to have uneven returns, like start-ups: a minority of projects are very successful, recouping costs for a larger number of failures. If so, the most important problem is to find more winners. Prodigy helps you do that, because you get to try things much faster.
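I don’t know Prodigy’s internals, but the generic shape of model-in-the-loop annotation is easy to sketch: the model surfaces the examples it is least sure about, the annotator answers with a quick accept/reject, and the model updates in between. Everything below is a stub.

```python
# Generic sketch of model-in-the-loop annotation (not Prodigy's actual code):
# always surface the example the current model is least certain about.
import random

pool = [f"example text {i}" for i in range(500)]  # unlabeled pool
scores = {}                                       # stub model: text -> P(label)

def predict(text):
    return scores.get(text, 0.5)                  # unseen -> maximally uncertain

def ask_human(text):
    """Stand-in for the one-click accept/reject UI."""
    return random.random() < 0.5

for _ in range(100):  # one annotation session
    text = min(pool, key=lambda t: abs(predict(t) - 0.5))  # most uncertain first
    scores[text] = 1.0 if ask_human(text) else 0.0         # stub model update
```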

How AI could learn about human behavior from YouTube

From Joseph Redmon: How computers learn to recognize objects instantly | TED.com

Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.

A few years ago, on my personal Twitter account, I suggested that a side benefit for Google of owning YouTube would be having the largest archive of human activities on video with which to train its AI. What Redmon did here is what I had in mind at that time.

By the way, the demonstration during the TED talk is impressive.

Ray Kurzweil on augmenting the human brain through AI, nanorobotics and cloud computing

From Ray Kurzweil: Get ready for hybrid thinking | TED.com

Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

Speaking of AI augmenting human intelligence rather than replacing it, Ray Kurzweil popularized the idea in 2014, suggesting that nanorobotics could do the trick in just a few decades.

Remember that he works for Google.

Apple sees AI as an augmentation of human intelligence, not a replacement

From Tom Gruber: How AI can enhance our memory, work and social lives | TED.com

Tom Gruber, co-creator of Siri, wants to make “humanistic AI” that augments and collaborates with us instead of competing with (or replacing) us. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function — from turbocharging our design skills to helping us remember everything we’ve ever read and the name of everyone we’ve ever met.

The video is short but gives a very clear idea of how Apple is thinking about AI and what the future applications could be.

We try to engineer AI without understanding intelligence or cognition first

From What an artificial intelligence researcher fears about AI

…as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.

and

We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

and

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of.

Wonderful blog post. Artificial intelligence experts face scientific, legal, moral and ethical dilemmas like no other experts before them in our history.

Facebook AI Evolves Its Language from Plain English To Something New

From AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

and

Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

What if artificial intelligence could help humans develop a more efficient, universal language?

AI and machine learning algorithms helped predict instances of schizophrenia with 74% accuracy

From IBM News room – 2017-07-21 IBM and University of Alberta Publish New Data on Machine Learning Algorithms to Help Predict Schizophrenia

In the paper, researchers analyzed de-identified brain functional Magnetic Resonance Imaging (fMRI) data from the open data set, Function Biomedical Informatics Research Network (fBIRN) for patients with schizophrenia and schizoaffective disorders, as well as a healthy control group. fMRI measures brain activity through blood flow changes in particular areas of the brain.

Specifically, the fBIRN data set reflects research done on brain networks at different levels of resolution, from data gathered while study participants conducted a common auditory test. Examining scans from 95 participants, researchers used machine learning techniques to develop a model of schizophrenia that identifies the connections in the brain most associated with the illness.
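The article is light on method, but the generic recipe for studies like this is: turn each subject’s fMRI time series into a functional-connectivity profile, then fit a classifier on those profiles. A shape-only sketch with random stand-in data (not IBM’s code):

```python
# Shape-only sketch of the generic fMRI-connectivity approach: per subject,
# correlate regional time series, then classify the connectivity profiles.
# All arrays below are random stand-ins.
import numpy as np
from sklearn.svm import SVC

n_subjects, n_regions, n_timepoints = 95, 20, 140
diagnoses = np.random.randint(0, 2, n_subjects)  # stub labels
profiles = []
for _ in range(n_subjects):
    ts = np.random.randn(n_timepoints, n_regions)    # stub fMRI time series
    conn = np.corrcoef(ts.T)                         # region-by-region correlation
    profiles.append(conn[np.triu_indices(n_regions, k=1)])  # unique pairs
clf = SVC().fit(np.array(profiles), diagnoses)  # real work: cross-validate
```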

Vicarious gets another $50M to attempt building artificial general intelligence

From Khosla Ventures leads $50 million investment in Vicarious’ AI tech | VentureBeat | Entrepreneur | by Bérénice Magistretti

The Union City, California-based startup is using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks. Vicarious focuses on the neocortex, a part of the brain concerned with sight and hearing.

“We aren’t trying to emulate the brain exactly,” wrote Vicarious cofounder and CEO Scott Phoenix, in an email to VentureBeat. “A good way to think about it is airplanes and birds. When building a plane, you want to borrow relevant features from birds, like low body weight and deformable wings, without getting into irrelevant details like feather colors and flapping.”

I think this quote is deeply inspired by the book Superintelligence by Nick Bostrom, which is not surprising, as Vicarious is trying to build the holy grail of AI: an artificial general intelligence.

They have the most impressive list of investors I have seen in a long time.

And yet, AI is easier to trick than people think

From Robust Adversarial Examples

We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.

This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it’s a monitor or desktop computer regardless of how it’s zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts.
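The basic attack behind images like this is worth seeing in miniature. Below is a fast-gradient-sign sketch against a toy linear scorer; the robust variants reported above additionally average the gradient over many random scales and rotations so the perturbation survives them.

```python
# Minimal adversarial-perturbation sketch (toy linear scorer, not a real
# classifier): push every input dimension against the gradient of the
# class score, within a small budget eps.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)       # stub model weights
x = rng.normal(size=64)       # stub "image"

eps = 0.1                     # perturbation budget per dimension
x_adv = x - eps * np.sign(w)  # gradient of the score (w @ x) w.r.t. x is w
print(w @ x, w @ x_adv)       # the class score drops; the label can flip
```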

Watch the videos.

I have never seen Elon Musk so concerned about AI

From Elon Musk says we need to regulate AI before it becomes a danger to humanity – The Verge

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association Summer Meeting on Saturday. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

The solution, says Musk, is regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.”

He doesn’t mince words anymore. He must have seen something that truly terrified him.

The full video is here: https://www.youtube.com/watch?v=2C-A797y8dA

Microsoft released a smartphone app that uses computer vision to describe the world for the visually impaired

From Microsoft’s new iPhone app narrates the world for blind people – The Verge

With the app downloaded, the users can point their phone’s camera at a person and it’ll say who they are and how they’re feeling; they can point it at a product and it’ll tell them what it is. All using artificial intelligence that runs locally on their phone.

The app works in a number of scenarios. As well as recognizing people it’s seen before and guessing strangers’ age and emotion, it can identify household products by scanning barcodes. It also reads and scans documents, and recognizes US currency.

Imagine if this were the key function of an earpiece like the one from Waverly Labs.

Bark.us uses AI to analyze kids’ online activity and alert parents about challenging situations

From Bark.us saves teens’ lives by using AI to analyze their online activity | VentureBeat | Bots | by Khari Johnson

Bark.us uses machine learning and statistical analysis to crawl conversations teens have on email, SMS, and platforms like Snapchat, Instagram, and WhatsApp. Analysis is performed to determine if a kid is suffering from cyberbullying, suicidal thoughts, possible depression, hate speech, or other attacks that can happen online without a parent or guardian aware anything is happening. Sexting and drug usage are also flagged. When signs of alarm are recognized, Bark alerts parents via text or email, then suggests potential next steps.

This sounds more like controlling than saving lives, but it might be a first step in the right direction.

What if, rather than alerting parents, this technology were integrated with a biohacking solution to improve how kids react to life’s challenges?

Google launches a dedicated fund for AI-first startups

From Anna Patterson talks Gradient Ventures, Google’s new AI fund | TechCrunch

It’s been pretty obvious for a few months now, but Google has finally admitted that it’s running its own investment fund targeting machine intelligence startups. The fund will go by the name Gradient Ventures and provide capital, resources and education to AI-first startups.

Google isn’t disclosing the size of the fund, but the company told us that it’s being run directly off of Google’s balance sheet and will have the flexibility to follow on when it makes sense. This is in contrast to GV (formerly Google Ventures) and CapitalG, which operate as independent funds.

AI is the first technology in a long time that poses a real threat to Google’s dominance. In other words, artificial intelligence is the best bet for a newcomer to become the next Google. No surprise Google wants to spot that newcomer as early as possible.

Google launches PAIR research initiative to study how humans interact with AI

From PAIR: the People + AI Research Initiative

Today we’re announcing the People + AI Research initiative (PAIR) which brings together researchers across Google to study and redesign the ways people interact with AI systems. The goal of PAIR is to focus on the “human side” of AI: the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive. The goal isn’t just to publish research; we’re also releasing open source tools for researchers and other experts to use.

Ethics and Governance AI Fund gives $7.6M to nine research organizations

From Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts | TechCrunch

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

Google couldn’t see what we see with glasses, so they are trying through our smartphones

From Google Lens offers a snapshot of the future for augmented reality and AI | AndroidAuthority

At the recent I/O 2017, Google stated that we were at an inflexion point with vision. In other words, it’s now more possible than ever before for a computer to look at a scene and dig out the details and understand what’s going on. Hence: Google Lens. This improvement comes courtesy of machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that utilize that data in useful ways. This is the same technology underlying voice assistants and even your recommendations on Spotify, to a lesser extent.

Students use AI to do math homework – Assisted Intelligence?

From Wolfram Alpha Is Making It Extremely Easy for Students to Cheat | WIRED

Still, the prevailing notion that Wolfram|Alpha is a form of cheating doesn’t appear to be dissipating. Much of this comes down to what homework is. If the purpose of homework is to build greater understanding of concepts as presented in class, Joyce is adamant that teachers should view Wolfram|Alpha as an asset. It’s not that Wolfram|Alpha has helped students “‘get through’ a math class by doing their homework for them,” he says, “but that we helped them actually understand what they were doing” in the first place. Dixon believes that Wolfram|Alpha can build confidence in students who don’t see themselves as having mathematical minds. Homework isn’t really about learning to do a calculation, but rather about learning to find and understand an answer regardless of how the calculation is executed.

That’s the route down which education appears to be headed. Once upon a time, education was all about packing as much information as possible into a human brain. Information was limited and expensive, and the smartest people were effectively the deepest and most organized filing cabinets. Today, it’s the opposite. “The notion of education as a transfer of information from experts to novices—and asking the novices to repeat that information, regurgitate it on command as proof that they have learned it—is completely disconnected from the reality of 2017,” says David Helfand, a Professor of Astronomy at Columbia University.

Questions:

  • Will AI make humans smarter or dumber?
  • How is this different from a surgeon using AI-powered AR goggles to perform surgery?

A team of five (plus AI) against tech behemoths

From The dangers of letting Big Tech control AI

AI certainly has many applications beyond the business needs of a few black-hole tech platforms. We’ve reached an exciting time when emerging technologies are facilitating smarter, faster, and better processes at increasingly lower costs, which is opening up the playing field to smaller, leaner players. It will become more and more common to see five-person startups go up against the tech behemoths.

AI as a better cardiologist

From The Machines Are Getting Ready to Play Doctor

The researchers partnered with iRhythm, a company that makes portable ECG devices. They collected 30,000 30-second clips from patients with different forms of arrhythmia. To assess the accuracy of their algorithm, the team compared its performance to that of five different cardiologists on 300 undiagnosed clips. They had a panel of three expert cardiologists provide a ground-truth judgment.

Deep learning involves feeding large quantities of data into a big simulated neural network, and fine-tuning its parameters until it accurately recognized problematic ECG signals. The approach has proven adept at identifying complex patterns in images and audio, and it has led to the development of better-than-human image-recognition and voice-recognition systems.

Eric Horvitz, managing director of Microsoft Research and both a medical doctor and an expert on machine learning, says others, including two different groups from MIT and the University of Michigan, are applying machine learning to the detection of heart arrhythmias.
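The article doesn’t publish the architecture, but the setup it describes (raw 30-second clips in, rhythm class out) maps naturally onto a 1-D convolutional network. A guess at the general shape, with the sampling rate and class count assumed for the sketch:

```python
# Guessed shape of an ECG rhythm classifier (not the study's actual model):
# a 1-D convolutional net over raw 30-second single-lead clips. The 200 Hz
# sampling rate and 12 rhythm classes are assumptions.
import tensorflow as tf

hz, seconds, n_classes = 200, 30, 12
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 16, strides=2, activation="relu",
                           input_shape=(hz * seconds, 1)),
    tf.keras.layers.Conv1D(64, 16, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(clips, labels)  # clips: (n_clips, 6000, 1) arrays of labeled ECG
```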

AI and wearables powering the Post-Truth Era

From Anti AI AI — Wearable Artificial Intelligence – DT R&D

Near the end of 2017 we’ll be consuming content synthesised to mimic real people, leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction. And as this technology is democratised it will be even more prevalent.

Preempting this we prototyped a device worn on the ear and connected to a neural net trained on real and synthetic voices called Anti AI AI. The device notifies the wearer when a synthetic voice is detected and cools the skin using a thermoelectric plate to alert the wearer the voice they are hearing was synthesised: by a cold, lifeless machine.

Mind-blowing.