From Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons – Future of Life Institute
An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.
In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.
Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
From PathNet: Evolution Channels Gradient Descent in Super Neural Networks
For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting.
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.
Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function.
We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A.
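The tournament-selection loop described above can be sketched in a few lines. This is a hypothetical toy, not the authors' code: the population sizes, mutation rate, and the layer/module constants are all my own illustrative assumptions, and the fitness function (training a pathway's parameters and measuring task performance) is abstracted into a callable.

```python
import random

# Illustrative constants (assumptions, not the paper's hyperparameters).
NUM_LAYERS, MODULES_PER_LAYER, MODULES_PER_PATH = 3, 10, 3

def random_pathway():
    # A pathway picks, for each layer, which modules are active in the
    # forward and backward passes.
    return [random.sample(range(MODULES_PER_LAYER), MODULES_PER_PATH)
            for _ in range(NUM_LAYERS)]

def mutate(pathway, rate=0.1):
    # Each active module index has a small chance of being reassigned.
    return [[random.randrange(MODULES_PER_LAYER) if random.random() < rate else m
             for m in layer] for layer in pathway]

def tournament_step(population, fitness):
    # Pick two pathways at random; the loser is overwritten by a mutated
    # copy of the winner. In PathNet, fitness is the task performance
    # obtained after training only the pathway's parameters.
    a, b = random.sample(range(len(population)), 2)
    winner, loser = (a, b) if fitness(population[a]) >= fitness(population[b]) else (b, a)
    population[loser] = mutate(population[winner])
    return population
```

Transfer learning then amounts to freezing the parameters on the best evolved pathway and running the same loop with a fresh population for the next task.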
From Prodigy: A new tool for radically efficient machine teaching | Explosion AI
Prodigy addresses the big remaining problem: annotation and training. The typical approach to annotation forces projects into an uncomfortable waterfall process. The experiments can’t begin until the first batch of annotations is complete, but the annotation team can’t start until they receive the annotation manuals. To produce the annotation manuals, you need to know what statistical models will be required for the features you’re trying to build. Machine learning is an inherently uncertain technology, but the waterfall annotation process relies on accurate upfront planning. The net result is a lot of wasted effort.
Prodigy solves this problem by letting data scientists conduct their own annotations, for rapid prototyping. Ideas can be tested faster than the first planning meeting could even be scheduled. We also expect Prodigy to reduce costs for larger projects, but it’s the increased agility we’re most excited about. Data science projects are said to have uneven returns, like start-ups: a minority of projects are very successful, recouping costs for a larger number of failures. If so, the most important problem is to find more winners. Prodigy helps you do that, because you get to try things much faster.
From Joseph Redmon: How computers learn to recognize objects instantly | TED.com
Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.
A few years ago, on my personal Twitter account, I suggested that a side benefit for Google of owning YouTube would be having the largest archive of human activities on video to train its AI. What Redmon did here is what I had in mind at the time.
By the way, the demonstration during the TED talk is impressive.
From Ray Kurzweil: Get ready for hybrid thinking | TED.com
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.
Speaking of AI augmenting human intelligence rather than replacing it: Ray Kurzweil popularized the idea in 2014, suggesting that nanorobotics could do the trick in just a few decades.
Remember that he works for Google.
From Tom Gruber: How AI can enhance our memory, work and social lives | TED.com
Tom Gruber, co-creator of Siri, wants to make “humanistic AI” that augments and collaborates with us instead of competing with (or replacing) us. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function — from turbocharging our design skills to helping us remember everything we’ve ever read and the name of everyone we’ve ever met.
The video is short but gives a very clear idea of how Apple is thinking about AI and what the future applications could be.
From What an artificial intelligence researcher fears about AI
…as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.
We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of.
Wonderful blog post. Artificial intelligence experts face scientific, legal, moral, and ethical dilemmas like no other experts before them in history.
From AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?
At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.
“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.
Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.
The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.
What if artificial intelligence could help humans develop a more efficient, universal language?
From IBM News room – 2017-07-21 IBM and University of Alberta Publish New Data on Machine Learning Algorithms to Help Predict Schizophrenia
In the paper, researchers analyzed de-identified brain functional Magnetic Resonance Imaging (fMRI) data from the open data set, Function Biomedical Informatics Research Network (fBIRN) for patients with schizophrenia and schizoaffective disorders, as well as a healthy control group. fMRI measures brain activity through blood flow changes in particular areas of the brain.
Specifically, the fBIRN data set reflects research done on brain networks at different levels of resolution, from data gathered while study participants conducted a common auditory test. Examining scans from 95 participants, researchers used machine learning techniques to develop a model of schizophrenia that identifies the connections in the brain most associated with the illness.
From Khosla Ventures leads $50 million investment in Vicarious’ AI tech | VentureBeat | Entrepreneur | by Bérénice Magistretti
The Union City, California-based startup is using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks. Vicarious focuses on the neocortex, a part of the brain concerned with sight and hearing.
“We aren’t trying to emulate the brain exactly,” wrote Vicarious cofounder and CEO Scott Phoenix, in an email to VentureBeat. “A good way to think about it is airplanes and birds. When building a plane, you want to borrow relevant features from birds, like low body weight and deformable wings, without getting into irrelevant details like feather colors and flapping.”
I think this quote is deeply inspired by the book Superintelligence by Nick Bostrom. That is not surprising, as Vicarious is trying to build the holy grail of AI: an artificial general intelligence.
They have the most impressive list of investors I have seen in a long time.
From Robust Adversarial Examples
We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it’s a monitor or desktop computer regardless of how it’s zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts.
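The trick behind transformation-robust adversarial examples can be illustrated with a toy: instead of optimizing a perturbation against a single image, you optimize it against the *expectation over random transformations* of that image. The sketch below is my own simplification, not OpenAI's code: a linear "classifier" stands in for a neural network, and random circular shifts stand in for rotation and rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # weights of a toy linear classifier
x = rng.normal(size=64)            # a "clean image", flattened

def score(v):
    # Positive score = the class we want the attack to suppress.
    return w @ v

delta = np.zeros(64)               # the adversarial perturbation
for _ in range(200):
    grad = np.zeros(64)
    for _ in range(10):            # sample transformations per step
        shift = rng.integers(64)
        # Gradient of score(np.roll(x + delta, shift)) w.r.t. delta
        # is np.roll(w, -shift), since the dot product commutes with
        # a matching counter-shift of the weights.
        grad += np.roll(w, -shift)
    delta -= 0.01 * grad / 10      # descend: push the score down on average
    delta = np.clip(delta, -0.5, 0.5)   # keep the perturbation small
```

After optimization, the perturbed input scores lower than the clean one *averaged over every shift*, which is the property that survives the camera moving: no single viewing angle undoes the attack.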
Watch the videos.
From Elon Musk says we need to regulate AI before it becomes a danger to humanity – The Verge
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association Summer Meeting on Saturday. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
The solution, says Musk, is regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.”
He doesn’t mince words anymore. He must have seen something that truly terrified him.
The full video is here: https://www.youtube.com/watch?v=2C-A797y8dA
From Microsoft’s new iPhone app narrates the world for blind people – The Verge
With the app downloaded, users can point their phone’s camera at a person and it’ll say who they are and how they’re feeling; they can point it at a product and it’ll tell them what it is. All using artificial intelligence that runs locally on their phone.
The app works in a number of scenarios. As well as recognizing people it’s seen before and guessing strangers’ age and emotion, it can identify household products by scanning barcodes. It also reads and scans documents, and recognizes US currency.
Imagine if this were the key function of an earpiece like the Waverly Labs one.
From Bark.us saves teens’ lives by using AI to analyze their online activity | VentureBeat | Bots | by Khari Johnson
Bark.us uses machine learning and statistical analysis to crawl conversations teens have on email, SMS, and platforms like Snapchat, Instagram, and WhatsApp. Analysis is performed to determine if a kid is suffering from cyberbullying, suicidal thoughts, possible depression, hate speech, or other attacks that can happen online without a parent or guardian being aware anything is happening. Sexting and drug usage are also flagged. When signs of alarm are recognized, Bark alerts parents via text or email, then suggests potential next steps.
This sounds more like controlling than saving lives, but it might be a first step in the right direction.
What if, rather than alerting parents, this technology were integrated with a biohacking solution to improve how kids react to life’s challenges?
From Anna Patterson talks Gradient Ventures, Google’s new AI fund | TechCrunch
It’s been pretty obvious for a few months now, but Google has finally admitted that it’s running its own investment fund targeting machine intelligence startups. The fund will go by the name Gradient Ventures and provide capital, resources and education to AI-first startups.
Google isn’t disclosing the size of the fund, but the company told us that it’s being run directly off of Google’s balance sheet and will have the flexibility to follow on when it makes sense. This is in contrast to GV (formerly Google Ventures) and CapitalG, which operate as independent funds.
AI is the first technology in a long time posing a real threat to Google’s dominance. In other words, artificial intelligence is the best bet for a newcomer to become the next Google. No surprise Google wants to spot that newcomer as early as possible.
From PAIR: the People + AI Research Initiative
Today we’re announcing the People + AI Research initiative (PAIR) which brings together researchers across Google to study and redesign the ways people interact with AI systems. The goal of PAIR is to focus on the “human side” of AI: the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive. The goal isn’t just to publish research; we’re also releasing open source tools for researchers and other experts to use.
From Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts | TechCrunch
A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.
The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.
From Google Lens offers a snapshot of the future for augmented reality and AI | AndroidAuthority
At the recent I/O 2017, Google stated that we were at an inflection point with vision. In other words, it’s now more possible than ever before for a computer to look at a scene and dig out the details and understand what’s going on. Hence: Google Lens.
This improvement comes courtesy of machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that utilize that data in useful ways. This is the same technology underlying voice assistants and even your recommendations on Spotify to a lesser extent.
From Wolfram Alpha Is Making It Extremely Easy for Students to Cheat | WIRED
Still, the prevailing notion that Wolfram|Alpha is a form of cheating doesn’t appear to be dissipating. Much of this comes down to what homework is. If the purpose of homework is to build greater understanding of concepts as presented in class, Joyce is adamant that teachers should view Wolfram|Alpha as an asset. It’s not that Wolfram|Alpha has helped students “‘get through’ a math class by doing their homework for them,” he says, “but that we helped them actually understand what they were doing” in the first place. Dixon believes that Wolfram|Alpha can build confidence in students who don’t see themselves as having mathematical minds. Homework isn’t really about learning to do a calculation, but rather about learning to find and understand an answer regardless of how the calculation is executed.
That’s the route down which education appears to be headed. Once upon a time, education was all about packing as much information as possible into a human brain. Information was limited and expensive, and the smartest people were effectively the deepest and most organized filing cabinets. Today, it’s the opposite.
“The notion of education as a transfer of information from experts to novices—and asking the novices to repeat that information, regurgitate it on command as proof that they have learned it—is completely disconnected from the reality of 2017,” says David Helfand, a Professor of Astronomy at Columbia University.
- Will AI make humans smarter or dumber?
- How is this different from a surgeon using AI-powered AR goggles to perform surgery?
From The dangers of letting Big Tech control AI
AI certainly has many applications beyond the business needs of a few black-hole tech platforms. We’ve reached an exciting time when emerging technologies are facilitating smarter, faster, and better processes at increasingly lower costs, which is opening up the playing field to smaller, leaner players. It will become more and more common to see five-person startups go up against the tech behemoths.
From The Machines Are Getting Ready to Play Doctor
The researchers partnered with iRhythm, a company that makes portable ECG devices. They collected 30,000 30-second clips from patients with different forms of arrhythmia. To assess the accuracy of their algorithm, the team compared its performance to that of five different cardiologists on 300 undiagnosed clips. They had a panel of three expert cardiologists provide a ground-truth judgment.
Deep learning involves feeding large quantities of data into a big simulated neural network, and fine-tuning its parameters until it accurately recognizes problematic ECG signals. The approach has proven adept at identifying complex patterns in images and audio, and it has led to the development of better-than-human image-recognition and voice-recognition systems.
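The shape of the approach can be sketched in a few lines of numpy. This is a deliberately tiny stand-in for the real model, not the Stanford/iRhythm architecture: one 1D convolutional layer slides learned filters over an ECG clip, a global average pool summarizes the features, and a logistic layer emits an arrhythmia probability. The filter count, widths, and sampling rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(signal, kernels):
    # kernels: (n_filters, width); returns (n_filters, n_windows).
    n_filters, width = kernels.shape
    windows = np.lib.stride_tricks.sliding_window_view(signal, width)
    return np.maximum(kernels @ windows.T, 0.0)    # ReLU activation

def predict(clip, kernels, weights, bias):
    feats = conv1d(clip, kernels).mean(axis=1)     # global average pool
    return 1.0 / (1.0 + np.exp(-(weights @ feats + bias)))  # sigmoid

# Untrained toy parameters; a real system fits these on tens of
# thousands of labeled clips via backpropagation.
kernels = rng.normal(size=(8, 16))
weights = rng.normal(size=8)
clip = rng.normal(size=3600)                       # e.g. 120 Hz x 30 s clip
p = predict(clip, kernels, weights, 0.0)           # probability in (0, 1)
```

Training then amounts to adjusting `kernels` and `weights` by gradient descent so that `p` agrees with the cardiologists' ground-truth labels.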
Eric Horvitz, managing director of Microsoft Research and both a medical doctor and an expert on machine learning, says others, including two different groups from MIT and the University of Michigan, are applying machine learning to the detection of heart arrhythmias.
From Anti AI AI — Wearable Artificial Intelligence – DT R&D
Near the end of 2017 we’ll be consuming content synthesised to mimic real people, leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction. And as this technology is democratised it will be even more prevalent.
Preempting this we prototyped a device worn on the ear and connected to a neural net trained on real and synthetic voices called Anti AI AI. The device notifies the wearer when a synthetic voice is detected and cools the skin using a thermoelectric plate to alert the wearer the voice they are hearing was synthesised: by a cold, lifeless machine.