Smart glasses designed to help dyslexic people read

From These smart glasses convert words into voice for people who are visually impaired – The Verge

The Oton Glass is a pair of glasses with two tiny cameras and an earphone on the sides. Half of the lens is a mirror that reflects the user’s eye so that the inner-facing camera can track eye movements and blinks. Users look at some text and blink to capture a photo of what’s in front of them, which gets transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and then converted into a voice that plays through the earpiece. If the system is unable to read those words, a remote worker is available to troubleshoot.
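
The pipeline described here is essentially capture → OCR → text-to-speech. As a rough illustration, here is a minimal sketch of that loop in Python, using off-the-shelf libraries (picamera, pytesseract, pyttsx3) as stand-ins for Oton Glass’s dedicated cloud recognizer and voice service; the function names and the fallback to a remote helper are my own assumptions, not the actual product’s code.

```python
# Minimal sketch of the blink -> capture -> OCR -> speech loop described above.
# Off-the-shelf libraries stand in for Oton Glass's cloud OCR and voice service.
from picamera import PiCamera      # outward-facing camera on the glasses
from PIL import Image
import pytesseract                 # local OCR as a stand-in for the cloud recognizer
import pyttsx3                     # text-to-speech played through the earpiece

def escalate_to_remote_worker(image_path):
    # Placeholder: in the real product a human operator transcribes the image.
    print(f"OCR failed; sending {image_path} to a remote helper")

def on_blink_detected():
    """Called when the inner-facing camera detects the capture blink."""
    camera = PiCamera()
    camera.capture('/tmp/scene.jpg')           # photograph what the user is looking at
    camera.close()

    text = pytesseract.image_to_string(Image.open('/tmp/scene.jpg'), lang='jpn+eng')
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)                        # read the recognized text aloud
        engine.runAndWait()
    else:
        escalate_to_remote_worker('/tmp/scene.jpg')
```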

The Oton was most recently a third-place runner-up for the James Dyson award in 2016:

Similar products exist, but none has been commercialized yet; doing so requires both a technological breakthrough and trial-and-error in how smart glasses are deployed. The originality of OTON GLASS lies in two aspects: technology and deployment. First, on the technology side, startups such as Orcam Inc. and Hours Technology Inc. are developing smart glasses for blind people, mainly building powerful OCR for English (the Latin alphabet) using machine learning techniques. OTON GLASS, by contrast, focuses on Japanese character recognition and aims to solve the user’s problems by acting as a hybrid (human-plus-computer) recognizer rather than relying on OCR technology alone. Second, in terms of deployment, OTON GLASS is an all-in-one device that combines the camera and the glasses, so it looks like a normal pair of glasses, and its capture trigger, based on a natural human behavior (blinking), makes the interaction feel natural.

China accounted for 48% of the world’s total AI startup funding in 2017, surpassing the US

From China overtakes US in AI startup funding with a focus on facial recognition and chips – The Verge

The competition between China and the US in AI development is tricky to quantify. While we do have some hard numbers, even they are open to interpretation. The latest comes from technology analysts CB Insights, which reports that China has overtaken the US in the funding of AI startups. The country accounted for 48 percent of the world’s total AI startup funding in 2017, compared to 38 percent for the US.

It’s not a straightforward victory for China, however. In terms of the volume of individual deals, the country only accounts for 9 percent of the total, while the US leads in both the total number of AI startups and total funding overall. The bottom line is that China is ahead when it comes to the dollar value of AI startup funding, which CB Insights says shows the country is “aggressively executing a thoroughly-designed vision for AI.”

I know the guys at CB Insights. Pretty reliable research firm.

AI can predict heart disease by looking at the eye’s blood vessels, with 70% accuracy

From Google’s new AI algorithm predicts heart disease by looking at your eyes – The Verge

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

and

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

and

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.
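
To make the “learning to associate telltale signs in the eye scans with the metrics” step concrete, here is a toy sketch of the kind of model involved: a small convolutional network that regresses a few risk factors (age, systolic blood pressure, a smoking score) from a retinal image. The architecture, sizes, and names are illustrative assumptions, not Google’s or Verily’s actual model.

```python
# Toy sketch: a tiny CNN mapping a retinal fundus image to three of the risk
# factors mentioned above. Purely illustrative; not Google/Verily's architecture.
import torch
import torch.nn as nn

class RetinaRiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # global pooling over the scan
        )
        self.head = nn.Linear(32, 3)            # outputs: [age, systolic BP, smoker score]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RetinaRiskNet()
scan = torch.randn(1, 3, 224, 224)              # one fake 224x224 RGB retinal scan
age, systolic_bp, smoker_score = model(scan)[0]
```

In the actual study, outputs like these (and the cardiovascular-event risk itself) were trained against the records of the nearly 300,000 patients mentioned above.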

Now, if you equip a pair of smart glasses with a scanner, you are basically going around with an AI that looks around you and inside you. At the same time. What are the implications?

Self-healing and recyclable electronic skin 

From This electronic skin can heal itself — and then make more skin – The Verge

Researchers have created an electronic skin that can be completely recycled. The e-skin can also heal itself if it’s torn apart. The device, described today in the journal Science Advances, is basically a thin film equipped with sensors that can measure pressure, temperature, humidity, and air flow. The film is made of three commercially available compounds mixed together in a matrix and laced with silver nanoparticles: when the e-skin is cut in two, adding the three compounds to the “wound” allows the e-skin to heal itself by recreating chemical bonds between the two sides. That way, the matrix is restored and the e-skin is as good as new. If the e-skin is broken beyond repair, it can just be soaked in a solution that “liquefies” it so that the materials can be reused to make new e-skin. One day, this electronic skin could be used in prosthetics, robots, or smart textiles.

Nanorobots have potential as intelligent drug delivery systems

From New DNA nanorobots successfully target and kill off cancerous tumors | TechCrunch

Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth

and

DNA nanorobots are a somewhat new concept for drug delivery. They work by getting programmed DNA to fold into itself like origami and then deploying it like a tiny machine, ready for action.

The [Chinese] scientists behind this study tested the delivery bots by injecting them into mice with human breast cancer tumors. Within 48 hours, the bots had successfully grabbed onto vascular cells at the tumor sites, causing blood clots in the tumors’ vessels and cutting off their blood supply, leading to the tumors’ death.

Remarkably, the bots did not cause clotting in other parts of the body, just the cancerous cells they’d been programmed to target

CRISPR pioneers now use it to detect infections like HPV, dengue, and Zika

From New CRISPR tools can detect infections like HPV, dengue, and Zika – The Verge

The new tools, developed by the labs of CRISPR pioneers Jennifer Doudna and Feng Zhang, are showcased in two studies published today in the journal Science. In one paper, Doudna’s team describes a system called DETECTR, which can accurately identify different types of the HPV virus in human samples. In the second paper, Zhang’s team shows an upgraded version of SHERLOCK — which was shown last year to detect viruses like Zika and dengue, as well as other harmful bacteria — in human samples.

and

The CRISPR used in the first Science study is called CRISPR-Cas12a. Doudna’s team discovered that when this type of CRISPR snips double-stranded DNA, it does something interesting: it starts shredding single-stranded DNA as well

The CRISPR system is programmed to detect the HPV DNA inside a person’s cells. When CRISPR detects it, it also cuts a “reporter molecule” with single-stranded DNA that releases a fluorescent signal. So if the cells are infected with HPV, scientists are able to see the signal and quickly diagnose a patient. For now, DETECTR was tested in a tube containing DNA from infected human cells, showing it could detect HPV16 with 100 percent accuracy, and HPV18 with 92 percent accuracy.

and

Called SHERLOCK, this system uses a variety of CRISPR enzymes, including Cas12a. Last year, Zhang’s team showed that SHERLOCK uses CRISPR-Cas13a to find the genetic sequence of Zika, dengue, and several other bacteria, as well as the sequences associated with a cancer mutation in a variety of human samples, such as saliva. Now, the team has improved the tool to be 100 times more sensitive and detect multiple viruses — such as Zika and dengue — in one sample simultaneously. It does this by combining different types of CRISPR enzymes, which are unleashed together to target distinct bits of DNA and RNA, another of the major biological molecules found in all forms of life. Some enzymes also work together to make the tool more sensitive.

If you read Doudna’s book, featured in the H+ “Key Books” section, you realise the enormous progress we made in the last 10 years in terms of DNA manipulation thanks to CRISPR, and yet you have a clear understanding that we are just scratching the surface of what is possible.

Sequence your genome for less than $1,000 and sell it via blockchain

From Human sequencing pioneer George Church wants to give you the power to sell your DNA on the blockchain | TechCrunch

MIT professor and godfather of the Human Genome Project George Church wants to put your genes on the blockchain.

His new startup Nebula Genomics plans to sequence your genome for less than $1,000 (the current going rate of whole genome sequencing) and then add your data to the blockchain through the purchase of a “Nebula Token.”

Church and his colleagues laid out in a recently released white paper that this will put the genomic power in the hands of the consumer, as opposed to companies like 23andMe and AncestryDNA, which own your genomic data after you take that spit tube test.

These companies sell that data in large swaths to pharmaceutical and research companies, often for millions of dollars. However, using the blockchain, consumers can choose to sell their own data directly.

and

Those buying up tokens and sequencing their DNA through Nebula don’t have to sell it for money, of course, and Nebula says they can still discover insights about their own genetics through the company app without sharing it elsewhere, if they desire.

However, all bought and sold data will be recorded on the blockchain, which is a technology allowing for the recording of all transactions using a key code known only to the person who holds the information.
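
The “recording of all transactions” piece is easier to grasp with a toy example: blockchain entries are hash-linked, so a record of who was granted access to a genome cannot be quietly rewritten later. This is a sketch of the general idea only, not Nebula Genomics’ actual design.

```python
# Toy hash-chained ledger: each record commits to the previous one, making the
# transaction history tamper-evident. A sketch of the concept, not Nebula's design.
import hashlib, json, time

def make_block(prev_hash, transaction):
    block = {
        "timestamp": time.time(),
        "transaction": transaction,      # e.g. "owner grants buyer access to dataset"
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("0" * 64, "genome dataset registered by its owner")
sale = make_block(genesis["hash"], "owner sells read access to a research buyer")

# Tampering with `genesis` afterwards would change its hash and break the link to `sale`.
```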

Two thoughts:

  • If this idea generates even a tiny bit of money for each individual involved, it might unlock unprecedented access to genetic information for advanced engineering.
  • Our genome is the second-to-last thing we have left to sell. The last one is our attention. But once they have our genome, our attention may come for free.

A biohacker injected himself with a DIY herpes treatment in front of a conference audience

From A biohacker injected himself with a DIY herpes treatment in front of a live audience – The Verge

Aaron Traywick, 28, who leads biotech firm Ascendance Biomedical, used an experimental herpes treatment that did not go through the typical route of clinical trials to test its safety.

Instead of being developed by research scientists in laboratories, it was created by a biohacker named Andreas Stuermer, who “holds a masters degree and is a bioentrepreneur and science lover,” according to a conference bio. This is typical of the Ascendance approach. The company believes that FDA regulations for developing treatments are too slow and that having biohackers do the research and experiment on themselves can speed up the process to everyone’s benefit. In the past, the company’s plans have included trying to reverse menopause, a method that is now actually in clinical trials.

From Biohackers Disregard FDA Warning on DIY Gene Therapy – MIT Technology Review

Experts say any gene therapy prepared by amateurs would probably not be potent enough to have much effect, but it could create risks such as an immune reaction to the foreign DNA. “I think warning people about this is the right thing,” says David Gortler, a drug safety expert with the consulting group Former FDA. “The bottom line is, this hasn’t been tested.”

The problem facing regulators is that interest in biohacking is spreading, and it’s increasingly easy for anyone to obtain DNA over the internet.

The last sentence is key. As in the tech industry, once you trigger bottom-up adoption the process is irreversible. And disruptive.

Police in China have begun using sunglasses equipped with facial recognition technology

From Chinese police spot suspects with surveillance sunglasses – BBC News

The glasses are connected to an internal database of suspects, meaning officers can quickly scan crowds while looking for fugitives.

The sunglasses have already helped police capture seven suspects, according to Chinese state media.

The seven people who were apprehended are accused of crimes ranging from hit-and-runs to human trafficking.

and

The technology allows police officers to take a photograph of a suspicious individual and then compare it to pictures stored in an internal database. If there is a match, information such as the person’s name and address will then be sent to the officer.

An estimated 170 million CCTV cameras are already in place and some 400 million new ones are expected to be installed in the next three years.

Many of the cameras use artificial intelligence, including facial recognition technology.
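
Under the hood, this kind of matching is typically done on face embeddings: the photo is reduced to a vector and compared against the database, with a match accepted only below some distance threshold. Here is a toy sketch with random vectors standing in for a real face-recognition model; it illustrates the generic technique, not the actual police system.

```python
# Sketch of embedding-based face matching: nearest neighbour in a database of
# face vectors, accepted only below a distance threshold. Illustrative only.
import numpy as np

def identify(query_embedding, database, threshold=0.6):
    """database: dict of person_id -> unit-length face embedding."""
    best_id, best_dist = None, float("inf")
    for person_id, emb in database.items():
        dist = np.linalg.norm(query_embedding - emb)
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)

# Fake 128-dimensional embeddings standing in for a real face-recognition model.
rng = np.random.default_rng(0)
db = {f"suspect_{i}": rng.normal(size=128) for i in range(1000)}
db = {k: v / np.linalg.norm(v) for k, v in db.items()}

probe = db["suspect_42"] + rng.normal(scale=0.02, size=128)   # noisy photo of a known person
probe /= np.linalg.norm(probe)
print(identify(probe, db))   # expected to return ("suspect_42", small distance)
```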

In December 2017, I published Our Machines Can Very Easily Recognise You Among At Least 2 Billion People in a Matter of Seconds. It didn’t take long to go from press claims to real-world implementation.

Human augmentation 2.0 is already here, just not evenly distributed.

MIT launches Intelligence Quest, an initiative to discover the foundations of human intelligence

From Institute launches the MIT Intelligence Quest | MIT News

At a time of rapid advances in intelligence research across many disciplines, the Intelligence Quest — MIT IQ — will encourage researchers to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of what is known.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

and

MIT is poised to lead this work through two linked entities within MIT IQ. One of them, “The Core,” will advance the science and engineering of both human and machine intelligence. A key output of this work will be machine-learning algorithms. At the same time, MIT IQ seeks to advance our understanding of human intelligence by using insights from computer science.

The second entity, “The Bridge,” will be dedicated to the application of MIT discoveries in natural and artificial intelligence to all disciplines, and it will host state-of-the-art tools from industry and research labs worldwide.

The Bridge will provide a variety of assets to the MIT community, including intelligence technologies, platforms, and infrastructure; education for students, faculty, and staff about AI tools; rich and unique data sets; technical support; and specialized hardware

and

In order to power MIT IQ and achieve results that are consistent with its ambitions, the Institute will raise financial support through corporate sponsorship and philanthropic giving.

MIT IQ will build on the model that was established with the MIT–IBM Watson AI Lab

What a phenomenal initiative. And MIT is one of the top places in the world to be for AI research.

Artificial General Intelligence might come out of this project.

Ultimately we want a (neuromorphic) chip as big as a fingernail to replace one big (AI) supercomputer

From Engineers design artificial synapse for “brain-on-a-chip” hardware | MIT News

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy

and

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
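
To get a feel for why that uniformity matters, here is a toy simulation (my own, not MIT’s): the basic neuromorphic operation is an analog weighted sum, and device-to-device variation in the synapses perturbs its result. The 4 percent figure comes from the article; the larger spread is a hypothetical contrast.

```python
# Toy model of how synapse-to-synapse variation perturbs an analog weighted sum,
# the basic operation of a neuromorphic chip. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.random(256)                 # activations arriving at 256 synapses
weights = rng.random(256)                # weights the synapses are supposed to store

for variation in (0.04, 0.25):           # ~4% as reported; 25% is a hypothetical contrast
    noisy = weights * (1 + rng.normal(scale=variation, size=weights.shape))
    ideal, actual = inputs @ weights, inputs @ noisy
    print(f"{variation:.0%} device variation -> output error "
          f"{abs(actual - ideal) / ideal:.2%}")
```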

Commercialization is still very far away, but what we are talking about here is building the foundation for artificial general intelligence (AGI) and, before that, for narrow AI that can be embedded in clothes and everyday objects, not just in smartphones and other electronic devices.

Imagine the possibilities if an AI chip were as cheap, small, and ubiquitous as Bluetooth chips are today.

Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity

Julian Assange on Twitter

The future of humanity is the struggle between humans that control machines and machines that control humans.
While the internet has brought about a revolution in our ability to educate each other, the consequent democratic explosion has shaken existing establishments to their core. Burgeoning digital super states such as Google, Facebook and their Chinese equivalents, who are integrated with the existing order, have moved to re-establish discourse control. This is not simply a corrective action. Undetectable mass social influence powered by artificial intelligence is an existential threat to humanity.
While still in its infancy, the geometric nature of this trend is clear. The phenomenon differs from traditional attempts to shape culture and politics by operating at a scale, speed, and increasingly at a subtlety, that appears highly likely to eclipse human counter-measures.
Nuclear war, climate change or global pandemics are existential threats that we can work through with discussion and thought. Discourse is humanity’s immune system for existential threats. Diseases that infect the immune system are usually fatal. In this case, at a planetary scale.

Self-doubting AI vs certain AI

From Google and Others Are Building AI Systems That Doubt Themselves – MIT Technology Review

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

and

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

and

Goodman explains that giving deep learning the ability to handle probability can make it smarter in several ways. It could, for instance, help a program recognize things, with a reasonable degree of certainty, from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
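
One lightweight way to get the kind of confidence measure Tran describes is Monte Carlo dropout: leave dropout switched on at prediction time, run the network several times, and treat the spread of the outputs as uncertainty. A minimal sketch of that generic technique follows; it is not necessarily what the Uber and Google frameworks implement.

```python
# Monte Carlo dropout: a cheap way to make a deep net report how certain it is.
# Generic illustration, not the specific probabilistic frameworks mentioned above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(x, n_samples=100):
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # prediction and its spread

x = torch.randn(1, 10)                 # e.g. features describing a driving scene
mean, std = predict_with_uncertainty(x)
print(f"prediction {mean.item():.2f} ± {std.item():.2f}")
# A large std flags an input the model should "doubt itself" on.
```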

Improving brain-computer interfaces by decrypting neural patterns

From Cracking the Brain’s Enigma Code – Scientific American

Many human movements, such as walking or reaching, follow predictable patterns, too. Limb position, speed and several other movement features tend to play out in an orderly way. With this regularity in mind, Eva Dyer, a neuroscientist at the Georgia Institute of Technology, decided to try a cryptography-inspired strategy for neural decoding.

Existing brain-computer interfaces typically use so-called ‘supervised decoders.’ These algorithms rely on detailed moment-by-moment movement information such as limb position and speed, which is collected simultaneously with recorded neural activity. Gathering these data can be a time-consuming, laborious process. This information is then used to train the decoder to translate neural patterns into their corresponding movements. (In cryptography terms, this would be like comparing a number of already decrypted messages to their encrypted versions to reverse-engineer the key.)

By contrast, Dyer’s team sought to predict movements using only the encrypted messages (the neural activity), and a general understanding of the patterns that pop up in certain movements.

and

Her team trained three macaque monkeys to either reach their arm or bend their wrist to guide a cursor to a number of targets arranged about a central point. At the same time, the researchers used implanted electrode arrays to record the activity of about 100 neurons in each monkey’s motor cortex, a key brain region that controls movement.

To find their decoding algorithm, the researchers performed an analysis on the neural activity to extract and pare down its core mathematical structure. Then they tested a slew of computational models to find the one that most closely aligned the neural patterns to the movement patterns.

and

Because Dyer’s decoder only required general statistics about movements, which tend to be similar across animals or across people, the researchers were also able to use movement patterns from one monkey to decipher reaches from the neural data of another monkey—something that is not feasible with traditional supervised decoders.
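
For contrast with Dyer’s approach, the “supervised decoder” baseline described earlier is usually little more than a regression from binned firing rates to limb kinematics, trained on paired recordings. Below is a toy sketch with synthetic data; the cryptography-inspired method removes exactly this need for paired (neural, movement) examples.

```python
# Toy supervised neural decoder: ridge regression from binned firing rates of
# ~100 motor-cortex neurons to 2-D cursor velocity. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 100
true_mapping = rng.normal(size=(n_neurons, 2))          # hidden neural-to-movement code

rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)  # spike counts
velocity = rates @ true_mapping / n_neurons + rng.normal(scale=0.05, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])   # train on paired data
print("held-out R^2:", decoder.score(rates[1500:], velocity[1500:]))
```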

“There are people alive today who will live for 1,000 years”

From Aubrey de Grey: scientist who says humans can live for 1,000 years

Most approaches aimed at combating ageing focus on arresting the harmful byproducts of metabolism, he says. These cause cellular damage and decay, which, in turn, accumulate to trigger the age-related disorders, such as cancer or dementia, that tend to finish us off.

For de Grey, this strategy turns anti-ageing treatment into an impossible game of Whac-A-Mole. Because we understand metabolism so poorly, our efforts to interfere with it remain crude and the process of decay races through the body far quicker than treatments to avert it can keep up.

Instead of stopping the damage, the approach that de Grey has developed at his research centre — Strategies for Engineered Negligible Senescence (SENS), a public charity that he co-founded in 2009 — focuses on repair. This “engineering” approach is designed to keep the process of degradation below the threshold at which it turns into life-threatening disease. “If you can repair the microscopic damage then you are sidestepping the bigger problem [of prevention]”.

Assuming for a moment that some people alive today will be able to extend their lifespan to 200 years, or even 1,000 years, what would they do with such an enormous amount of time?

Today humans don’t really have a “life strategy”. They just live, allocating their lifetime to various activities according to what society has established. But what happens when your time extends well beyond the expectations of your society?

You may want to watch de Grey’s TED Talk, too: A roadmap to end aging

Infusions of blood plasma from young donors to rejuvenate the body

From Exclusive: Inside the clinic offering young blood to cure ageing | New Scientist

So it’s a bit odd that this is the epicentre of a phenomenon rocking Silicon Valley: young blood treatments. JR is one of about 100 people who have each paid $8000 to join a controversial trial, offering them infusions of blood plasma from donors aged between 16 and 25 in a bid to turn back the clock. Participants have come from much further afield, including Russia and Australia.

and

In 2014, a team led by Tony Wyss-Coray, a neuroscientist at Stanford University, injected middle-aged mice with plasma from young mice. Sure enough, after three weeks they had anatomical improvements in the brain and a cognitive boost, compared with mice given a placebo.

The plasma didn’t even need to come from the same species – old mice became just as sprightly when the injection came from young humans. “We saw these astounding effects,” Wyss-Coray told New Scientist in 2014. “The human blood had beneficial effects on every organ we’ve studied so far.”

and

Ambrosia is a start-up headquartered in Washington DC. The trial didn’t need regulatory approval because plasma is already a standard treatment to replace missing proteins in people with rare genetic diseases. And there’s no placebo arm to it. All you need to join is a date of birth that makes you over 35 – and a spare $8000.

For your money, you are infused with 2 litres of plasma left over from young people who have donated to blood centres (see “Blood myths”). Unlike the trials looking at young blood’s effects on specific diseases, Ambrosia has a softer target: the general malaise of being old. In addition to measuring changes in about 100 biomarkers in blood, the firm is also “looking for general improvements”, says Jesse Karmazin, who runs the start-up.

The methodology falls short of the normal standards of scientific rigour, so it’s unsurprising that scientists and ethicists have accused Karmazin’s team of taking advantage of public excitement around the idea.

The numbers were as unverifiable as they were impressive: one month after treatment, 70 participants saw reductions in blood factors associated with risk of cancer, Alzheimer’s disease and heart disease, and reductions in cholesterol were on par with those from statin therapy.

and

Risks commonly associated with plasma transfusion include transfusion-related acute lung injury, which can be fatal; transfusion-associated circulatory overload; and allergic reactions. Rare complications include catching an infectious disease: blood products carry a greater than 1 in a million chance of HIV transmission. That’s too risky for JR, who tells me that before every treatment he takes a dose of the HIV prophylactic PrEP.

and

There could be risks of developing autoimmune disorders. And some fear that pumping stimulating proteins into people for years could lead to cancer. “If you keep infusing blood, the risk of reactions goes up,” says Dobri Kiprov, an immunologist at California Pacific Medical Center in San Francisco. “Many of these people are just eager to get younger – they don’t have a particular disease, so it’s not justified.”

It sounds dangerous and unproven, but multiple high-profile startups are pursuing this road:

Researchers working with Google’s life-extension biotech arm Calico, among others, developed an experiment in which a pump ferried half the blood from one individual into another.

and

anti-ageing start-up Unity Biotechnology, which is backed by Amazon founder Jeff Bezos’s investment company. They are developing a blood-exchange device, a kind of dialysis machine for old age, which cycles your blood through a filter that washes a laundry list of harmful compounds out of the plasma before returning it to you. This would carry no immune effects or disease risks, because it’s your own blood. No regulatory approval is needed, because dialysis filters that remove proteins from plasma are already in use, for example to remove cholesterol in people with certain hereditary diseases.

They are also developing sensors to notify you when levels of bad biomarkers are getting too high – a decrepitude meter to tell you when it’s time for a decrepitude wash.

You may want to watch Tony Wyss-Coray’s TED Talk, too: How young blood might help reverse aging. Yes, really

The US Air Force has a division dedicated to human performance enhancement

From 711th Human Performance Wing

The 711th Human Performance Wing (711 HPW), headquartered at Wright-Patterson Air Force Base in Ohio, is the first human-centric warfare wing to consolidate human performance research, education and consultation under a single organization. Established under the Air Force Research Laboratory, the 711 HPW is comprised of the Airman Systems Directorate (RH), the United States Air Force School of Aerospace Medicine (USAFSAM) and the Human Systems Integration Directorate (HP). The Wing delivers unparalleled capability to the Air Force through a combination of world class infrastructure and expertise of its diverse workforce of military, civilian and contractor personnel encompassing 75 occupational specialties including science and engineering, occupational health and safety, medical professions, technicians, educators, and business operations and support.

VISION
To be a world leader for human performance.

MISSION
To advance human performance in air, space, and cyberspace through research, education, and consultation. The Wing supports the most critical Air Force resource – the Airman of our operational military forces. The Wing’s primary focus areas are aerospace medicine, Human Effectiveness Science and Technology, and Human Systems Integration. In conjunction with the Naval Medical Research Unit – Dayton and surrounding universities and medical institutions, the 711 HPW functions as a Joint Department of Defense Center of Excellence for human performance sustainment, optimization, and readiness.

Notice the inclusion of “cyberspace” among the environments where they try to advance human performance.

Smart diapers for the elderly – when is smart monitoring too much monitoring?

From Pixie Scientific announces availability for purchase in the UK of Pixie Pads, the first adult

Pixie Pads will help incontinent adults, including Alzheimer’s and other dementia sufferers, for whom behavioral symptoms of UTI are often confused with progression of dementia. Patients suffering the effects of stroke, spinal cord injury, or developmental disabilities, and men recovering from radical prostatectomy will also benefit from continuous monitoring enabled by Pixie Pads.

and

Disposable Pixie Pads contain an indicator panel that is scanned by a caregiver using the mobile Pixie App at changing time. The app stores urinalysis data in a secure online service for review and long-term monitoring. It issues an alert to a professional caregiver if there are signs of an infection that require further attention.

This was happening in mid 2016. One year later, Pixie Scientific got the FDA approval to sell in the US as well and started shipping the pads.

Notice that the company initially targeted a completely different market, newborn children, but I guess it wasn’t received too well. While monitoring the body can help diagnose and cure illnesses early on, it’s a big cultural shift from the state of “blindness” we are used to. Too much monitoring can create a state of anxiety and hyper-reaction to any exception to the baseline, not just legit symptoms.

CRISPR might be employed to destroy entire species

From A Crack in Creation:

Ironically, CRISPR might also enable the opposite: forcible extinction of unwanted animals or pathogens. Yes, someday soon, CRISPR might be employed to destroy entire species—an application I never could have imagined when my lab first entered the fledgling field of bacterial adaptive immune systems just ten years ago. Some of the efforts in these and other areas of the natural world have tremendous potential for improving human health and well-being. Others are frivolous, whimsical, or even downright dangerous. And I have become increasingly aware of the need to understand the risks of gene editing, especially in light of its accelerating use. CRISPR gives us the power to radically and irreversibly alter the biosphere that we inhabit by providing a way to rewrite the very molecules of life any way we wish. At the moment, I don’t think there is nearly enough discussion of the possibilities it presents—for good, but also for ill.

We have a responsibility to consider the ramifications in advance and to engage in a global, public, and inclusive conversation about how to best harness gene editing in the natural world, before it’s too late.

and

If the first of these gene drives (for pigmentation) seems benign and the second (for malaria resistance) seems beneficial, consider a third example. Working independently of the California scientists, a British team of researchers—among them Austin Burt, the biologist who pioneered the gene drive concept—created highly transmissive CRISPR gene drives that spread genes for female sterility. Since the sterility trait was recessive, the genes would rapidly spread through the population, increasing in frequency until enough females acquired two copies, at which point the population would suddenly crash. Instead of eradicating malaria by genetically altering mosquitoes to prevent them from carrying the disease, this strategy presented a blunter instrument—one that would cull entire populations by hindering reproduction. If sustained in wild-mosquito populations, it could eventually lead to outright extermination of an entire mosquito species.

and

It’s been estimated that, had a fruit fly escaped the San Diego lab during the first gene drive experiments, it would have spread genes encoding CRISPR, along with the yellow-body trait, to between 20 and 50 percent of all fruit flies worldwide.

The author of this book, Jennifer Doudna, is one of the scientists who discovered the groundbreaking gene editing technique CRISPR-Cas9. The book is a fascinating account of how CRISPR came to be, and it’s listed in the Key Books section of H+.

The book was finished in September 2016 (and published in June 2017), so the warning is quite recent.

You may also want to watch Doudna’s TED Talk about the bioethics of CRISPR: How CRISPR lets us edit our DNA.

Using Artificial Intelligence to augment human intelligence

From Using Artificial Intelligence to Augment Human Intelligence

In one common view of AI, our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful if it weren’t buried under a wall of complexity that puts it out of reach for many readers.

This could be a seminal paper.

UK company pioneers tissue engineering with 3D bioprinters 

From Applications | 3Dynamic Systems Ltd

3Dynamic Systems is currently developing a range of 3D bioprinted vascular scaffolds as part of its new product line. We have been developing 3D bioprinting as a research tool since 2012 and have now pushed forward with the commercialisation of the first 3D tissue structures. Called the vascular scaffold, it is the first commercial tissue product to be developed by us. 3DS research has accelerated recently and work is now focussing on the fabrication of heterogeneous tissues for use in surgery.

Currently we manufacture 20mm length sections of bioprinted vessels, which if successful will lead to larger and more complex vessels to be bioprinted in 3D. Our research concentrates on using the natural self-organising properties of cells in order to produce functional tissues.

At 3DS, we have a long-term goal that this technology will one day be suitable for surgical therapy and transplantation. Blood vessels are made up of different cell types and our new Omega allows for many types of cells to be deposited in 3D. Biopsied tissue material is gathered from a host, with stem cells isolated and multiplied. These cells are cultured and placed in a bioreactor, which provides oxygen and other nutrients to keep them alive. The millions of cells that are produced are then added to our bioink and bioprinted into the correct 3D geometry.

Over the next two years we will begin the long road towards the commercialisation of our 3D bioprinted vessels. Further development of this technology will harness tissues for operative repair and, in the short term, tissues for pharmaceutical trials. This next step in the development of the process could one day transform the field of reconstructive medicine and may lead to directly bioengineering replacement human tissues on demand for transplantation.

The next opportunity for our research is in developing organ on a chip technology to test drugs and treatments. So far we have initial data based on our vascular structures. In the future this method may be used to analyse any side-effects of new pharmaceutical products.

3Dynamic Systems builds 3D bioprinters that automatically produce 3D tissue structures. The company also builds perfusion bioreactors that test tissue structures over periods of months for the effects of stimulation and test the influence of drugs on 3D cell behaviour.

Normally, I don’t quote the website of companies working in the field of research and commercial application covered by H+. But these guys followed @hplus on Twitter without asking for any coverage and have a crystal clear website. I wish more companies were like this.

Ford testing exoskeletons for workers in final assembly plant

From Are exoskeletons the future of physical labor? – The Verge

The vest that Paul Collins has been wearing at Ford is made by Ekso Bionics, a Richmond, California-based company. It’s an electronic-free contraption, and the soft part that hugs his chest looks like the front of a backpack. But the back of it has a metal rod for a spine, and a small, curved pillow rests behind his neck. Extending from the spine are spring-loaded arm mechanics, ones that help Collins lift his arms to install carbon cans on Ford C-Max cars, and rubber grommets on Ford Focuses — about 70 cars an hour.

and

Since 2011, Ford has been working, in some capacity, on wearable robotics solutions. But rather than trying to develop something that would give workers superhuman strength, the idea is to prevent injury. “In 2016, our injury statistics were the lowest we’ve seen on record. We’ve had an 83 percent decrease in some of these metrics over the past five years, which is all great,” Smets said. “But if you look at the body parts that are still getting injured, it’s predominantly the shoulder. That’s our number one joint for injury. It’s also the longest to return to full functionality, and the most costly.”

The Ekso vest I tried costs around $6,500 and weighs nine pounds. Smets handed me a power tool, flipped a physical switch on the arm of the vest, and told me to raise my arms over my head as though I was on an assembly line. At some point during my movement, the exosuit kicked into action, its spring mechanism lifting my arms the rest of the way. I could leave my arms in place above my head, too, fully supported. My fingers started to tingle after a while in that position.

Watch the video.

Tomorrow’s replacement skin could be 3D printed from a new ink embedded with living bacteria

From This 3D-printed ‘living ink’ could someday help with skin replacements – The Verge

Bacteria are able to do everything from breaking down toxins to synthesizing vitamins. When they move, they create strands of a material called cellulose that is useful for wound patches and other medical applications. Until now, bacterial cellulose could only be grown on a flat surface — and few parts of our body are perfectly flat. In a paper published today in Science Advances, researchers created a special ink that contains these living bacteria. Because it is an ink, it can be used to 3D print in shapes — including a T-shirt, a face, and circles — and not just flat sheets.

Bacterial cellulose is free of debris, holds a lot of water, and has a soothing effect once it’s applied on wounds. Because it’s a natural material, our body is unlikely to reject it, so it has many potential applications for creating skin transplants, biosensors, or tissue envelopes to carry and protect organs before transplanting them.

The amount of research on skin synthesis and augmentation is surprising. H+ is capturing a lot of articles about it.

“We have entered the age where the human genome is a real drug target” – CRISPR stopped mice from going deaf

From This gene therapy stopped mice from going deaf — and could save some humans’ hearing too – The Verge

Although people can lose their hearing for a variety of reasons — old age, as well as exposure to loud noises — genetics are behind a little less than half of all deafness cases, says study co-author David Liu, a professor of chemistry and chemical biology at Harvard, who also has affiliations with the Broad Institute and the Howard Hughes Medical Institute. The hearing-loss disease tackled in this study is caused by mutations in a gene called TMC1. These mutations cause the death of so-called hair cells in the inner ear, which convert mechanical vibrations like sound waves into nerve signals that the brain interprets as hearing. As a result, people start losing their hearing in their childhood or in the 20s, and can go completely deaf by their 50s and 60s.

To snip those mutant copies of the gene, Liu and his colleagues mixed CRISPR-Cas9 with a lipid droplet that allows the gene-editing tool to enter the hair cells and get to work. When the concoction was injected into one ear of newborn mice with the disease, the molecular scissors were able to precisely cut the deafness-causing copy of the gene while leaving the healthy copy alone, even if the two copies differ by just one base pair. The treatment allowed the hair cells to stay healthier and prevented the mice from going deaf.

After four weeks, the untreated ears could only pick up noises that were 80 decibels or louder, roughly as loud as a garbage disposal, Liu says. Instead, the injected ears could typically hear sounds in the 60 to 65 decibel range, which is the same as a quiet conversation. “If one can translate that 15 decibel improvement in hearing sensitivity in humans, it would actually make a potential difference in the quality of their hearing capability,” Liu tells The Verge.

Limb reanimation through neuroscience and machine learning

From First paralysed person to be ‘reanimated’ offers neuroscience insights : Nature

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.

Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.
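
On the software side, the loop amounts to a classifier followed by a lookup: decode which of the trained hand motions the user intends from the chip’s signals, then drive the matching stimulation pattern on the forearm sleeve. The sketch below is a toy version with made-up data and an assumed electrode count, not the actual clinical system.

```python
# Toy decode-then-stimulate loop: classify the intended hand motion from neural
# features, then select a stimulation pattern for the forearm sleeve.
# Entirely illustrative; not the actual clinical system.
import numpy as np
from sklearn.linear_model import LogisticRegression

MOTIONS = ["open hand", "close hand", "wrist flex", "wrist extend",
           "thumb pinch", "index point"]

# Pretend calibration data: ~100 neural features recorded while the user imagines each motion.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(len(MOTIONS), 100))
X = np.vstack([c + rng.normal(scale=0.5, size=(200, 100)) for c in centroids])
y = np.repeat(np.arange(len(MOTIONS)), 200)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

def stimulation_pattern(motion_index, n_electrodes=128):   # electrode count is assumed
    """Hypothetical map from a decoded motion to per-electrode sleeve intensities."""
    pattern = np.zeros(n_electrodes)
    pattern[motion_index::len(MOTIONS)] = 1.0               # placeholder spatial pattern
    return pattern

features = centroids[2] + rng.normal(scale=0.5, size=100)   # new brain activity sample
intent = decoder.predict(features.reshape(1, -1))[0]
print("decoded:", MOTIONS[intent])
sleeve_command = stimulation_pattern(intent)
```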

This story is a year and a half old, but I just found out about it and I think it’s a critical piece of the big picture that H+ is trying to narrate.

A growing number of artificial intelligence researchers focus on algorithmic bias

Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Prof. Crawford is not the only one looking into algorithmic bias. As she shows in her presentation, a growing number of research papers focus on it, and even government agencies have started questioning how AI decisions are made.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias could manipulate people’s worldview in ways that mass media and politics can’t even dream about.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

A task force to review New York City agencies’ use of algorithms and their bias

From New York City Takes on Algorithmic Discrimination | American Civil Liberties Union

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

Timely, as more and more AI researchers look into algorithmic bias.

Importance of Artificial Intelligence to Department of Defense

From Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD:

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

DARPA has become the world’s largest funder of “gene drive” research

From US military agency invests $100m in genetic extinction technologies | Science | The Guardian

A US military agency is investing $100m in genetic extinction technologies that could wipe out malarial mosquitoes, invasive rodents or other species, emails released under freedom of information rules show.

The UN Convention on Biological Diversity (CBD) is debating whether to impose a moratorium on the gene research next year and several southern countries fear a possible military application.

and

Gene-drive research has been pioneered by an Imperial College London professor, Andrea Crisanti, who confirmed he has been hired by Darpa on a $2.5m contract to identify and disable such drives.

Human augmentation has, at least at the beginning, a very limited number of very specific use cases. The supersoldier certainly is the top one.

Defeating cancer costs $500,000 

From Genetic Programmers Are the Next Startup Millionaires – MIT Technology Review

Cell Design Labs, founded by University of California, San Francisco, synthetic biologist Wendell Lim, creates “programs” to install inside T cells, the killer cells of the immune system, giving them new abilities.

Known as “CAR-T,” the treatments are both revolutionary and hugely expensive. A single dose is priced at around $500,000 but often results in a cure. Gilead quickly paid $12 billion to acquire Kite Pharma, maker of one of those treatments.

The initial T cell treatments, however, work only with blood cancers.

From FDA Approves Groundbreaking Gene Therapy for Cancer – MIT Technology Review

The FDA calls the treatment, made by Novartis, the “first gene therapy” in the U.S. The therapy is designed to treat an often-lethal type of blood and bone marrow cancer that affects children and young adults. Known as a CAR-T therapy, the approach has shown remarkable results in patients. The one-time treatment will cost $475,000, but Novartis says there will be no charge if a patient doesn’t respond to the therapy within a month.

The therapy, which will be marketed as Kymriah, is a customized treatment that uses a patient’s own T cells, a type of immune cell. A patient’s T cells are extracted and cryogenically frozen so that they can be transported to Novartis’s manufacturing center in New Jersey. There, the cells are genetically altered to have a new gene that codes for a protein—called a chimeric antigen receptor, or CAR. This protein directs the T cells to target and kill leukemia cells with a specific antigen on their surface. The genetically modified cells are then infused back into the patient.

This is less than the $700,000 previously reported, but still a fortune.

In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation

From In Vivo Target Gene Activation via CRISPR/Cas9-Mediated Trans-epigenetic Modulation: Cell

Current genome-editing systems generally rely on inducing DNA double-strand breaks (DSBs). This may limit their utility in clinical therapies, as unwanted mutations caused by DSBs can have deleterious effects. CRISPR/Cas9 system has recently been repurposed to enable target gene activation, allowing regulation of endogenous gene expression without creating DSBs. However, in vivo implementation of this gain-of-function system has proven difficult. Here, we report a robust system for in vivo activation of endogenous target genes through trans-epigenetic remodeling. The system relies on recruitment of Cas9 and transcriptional activation complexes to target loci by modified single guide RNAs. As proof-of-concept, we used this technology to treat mouse models of diabetes, muscular dystrophy, and acute kidney disease. Results demonstrate that CRISPR/Cas9-mediated target gene activation can be achieved in vivo, leading to measurable phenotypes and amelioration of disease symptoms. This establishes new avenues for developing targeted epigenetic therapies against human diseases.

CRISPR can be repurposed to enable target gene activation

From Adapted Crispr gene editing tool could treat incurable diseases, say scientists | The Guardian

The technique is an adapted version of the powerful gene editing tool called Crispr. While the original version of Crispr snips DNA in precise locations to delete faulty genes or over-write flaws in the genetic code, the modified form “turns up the volume” on selected genes.

and

In the new version a Crispr-style guide is still used, but instead of cutting the genome at the site of interest, the Cas9 enzyme latches onto it. The new package also includes a third element: a molecule that homes in on the Cas9 and switches on whatever gene it is attached to.

and

The team showed that mice with a version of muscular dystrophy, a fatal muscle-wasting disorder, recovered muscle growth and strength. The illness is caused by a mutation in the gene that produces dystrophin, a protein found in muscle fibres. However, rather than trying to replace this gene with a healthy version, the team boosted the activity of a second gene that produces a protein called utrophin that is very similar to dystrophin and can compensate for its absence.

Of course, once you can activate genes at will, you can also boost a perfectly healthy human in areas where he/she is weak or inept.

Genetic engineering for skill enablement, that is.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

From [1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
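The geometric idea at the heart of the paper, finding a gender direction and projecting it out of gender-neutral words, is compact enough to sketch. Below is a minimal, hypothetical Python illustration; the paper's actual method uses PCA over several definitional pairs plus an extra equalization step, and the toy vectors and function names here are my own assumptions.

import numpy as np

def gender_direction(embeddings, pairs=(("she", "he"), ("woman", "man"), ("her", "him"))):
    # Estimate the gender direction as the average difference of definitional pairs.
    diffs = [embeddings[a] - embeddings[b] for a, b in pairs]
    direction = np.mean(diffs, axis=0)
    return direction / np.linalg.norm(direction)

def neutralize(vector, direction):
    # Remove the component of a word vector that lies along the gender direction.
    debiased = vector - np.dot(vector, direction) * direction
    return debiased / np.linalg.norm(debiased)

# Toy example; real use would load word2vec or GloVe vectors instead.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=50) for w in ("she", "he", "woman", "man", "her", "him", "receptionist")}
g = gender_direction(embeddings)
embeddings["receptionist"] = neutralize(embeddings["receptionist"], g)

After neutralization, a word like receptionist has no remaining component along the estimated she/he axis, which is the property the paper's debiasing algorithms formalize and evaluate.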

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.
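Under the hood, this kind of identification typically boils down to a nearest-neighbour search over face embeddings. The following is a generic, hypothetical Python sketch of that matching step, not Yitu's actual system; the embedding model, gallery, and threshold are all assumptions.

import numpy as np

def identify(probe, gallery, threshold=0.6):
    # Compare a probe face embedding against a gallery of known identities
    # using cosine similarity; return the best match if it clears the threshold.
    probe = probe / np.linalg.norm(probe)
    best_id, best_score = None, -1.0
    for identity, vec in gallery.items():
        vec = vec / np.linalg.norm(vec)
        score = float(np.dot(probe, vec))
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

At the scale of 1.8 billion photographs the linear scan above would be replaced by an approximate nearest-neighbour index, but the matching logic stays the same.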

The percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences

The Eurasia Group and Sinovation Ventures released a report titled China embraces AI: A Close Look and A Long View with some interesting data.

The first bit is a chart showing how the share of contributions by Chinese researchers to the top 100 AI journals/conferences rose from 23% / 25% (authoring/citations) in 2006 to almost 43% / 56% (authoring/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

With the massive commitment of the Chinese government, these numbers are bound to grow significantly.

Google’s open-source tool DeepVariant achieves unprecedented accuracy in human genome sequencing

From Google Is Giving Away AI That Can Build Your Genome Sequence | Wired:

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes.

And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one [a deep neural network] to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

and

Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But the data produced by today’s machines still yield only incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics.

and

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set.

After the FDA contest, they transitioned the model to TensorFlow, Google’s artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, FreeBayes, and SAMtools, sometimes reducing errors by as much as 10-fold.

DeepVariant is now open source and available here: https://github.com/google/deepvariant
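To make the “variant calling as image classification” idea above concrete, here is a purely conceptual Python sketch. It is not DeepVariant’s code or API; the channel choices, window size, and classifier are simplified assumptions meant only to show how a read pileup can be turned into an image-like tensor and classified.

import numpy as np

def pileup_to_tensor(reads, reference, window=221):
    # Encode a read pileup around a candidate variant site as an image-like tensor.
    # Channels (simplified stand-ins): base identity, base quality, match-to-reference.
    tensor = np.zeros((len(reads), window, 3), dtype=np.float32)
    base_code = {"A": 0.25, "C": 0.5, "G": 0.75, "T": 1.0}
    for i, read in enumerate(reads):
        for j, (base, qual) in enumerate(zip(read["bases"][:window], read["quals"][:window])):
            tensor[i, j, 0] = base_code.get(base, 0.0)
            tensor[i, j, 1] = min(qual, 60) / 60.0
            tensor[i, j, 2] = 1.0 if j < len(reference) and base == reference[j] else 0.0
    return tensor

def call_genotype(tensor, classifier):
    # Any image classifier can sit here; DeepVariant uses a convolutional network.
    probs = classifier(tensor[np.newaxis, ...])
    return ("hom_ref", "het", "hom_alt")[int(np.argmax(probs))]

The real pipeline adds candidate-site selection, many more channels, and a trained CNN, but the decisive trick is exactly this recasting of a statistics problem as an image problem.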

Google competes with many other vendors on many fronts. But while its competitors are focused on battling for today’s market opportunities, Google is busy in a solitary race to control the battlefield of the future: the human body.

The human body is the ultimate data center.

I always wondered what it would be like if a superior species landed on Earth and showed us how it plays chess

From Google’s AlphaZero Destroys Stockfish In 100-Game Match – Chess.com

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

Google AutoML generates its first outperforming AI child

From Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.

AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?
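Since the image-classification variant of NASNet has been open-sourced, trying it out is straightforward. Keras ships a NASNet implementation; a minimal sketch, assuming TensorFlow is installed, the ImageNet weights download successfully, and example.jpg is a placeholder path, might look like this.

import numpy as np
from tensorflow.keras.applications.nasnet import NASNetLarge, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load NASNet-A (Large) pre-trained on ImageNet; weights are downloaded on first use.
model = NASNetLarge(weights="imagenet")

# Classify a single image (the path is a placeholder).
img = image.load_img("example.jpg", target_size=(331, 331))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])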

We are waiting to develop a human-level artificial intelligence and to see whether it will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.

A deeper look into Kernel’s plan to create a brain prosthetic

From Inside the Race to Build a Brain-Machine Interface—and Outpace Evolution | WIRED

The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.

Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.

That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.

Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to make a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct his first human test, the first ever for a commercial memory prosthesis.

Long and detailed report on what Kernel is doing. Really worth your time.
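For readers who just want the gist of the loop the article describes, record neural signals, translate them into a code, enhance the code, and write it back, here is a deliberately toy Python sketch. Nothing in it reflects Kernel's unpublished algorithm; the linear decode/encode matrices and the "gain" step are pure assumptions used only to show the shape of the pipeline.

import numpy as np

def closed_loop_step(recorded_features, model, gain=1.2):
    # Decode recorded neural features into a latent "memory code",
    # nudge it, and re-encode a stimulation pattern (all illustrative).
    latent = model["decode"] @ recorded_features
    enhanced = latent * gain
    return model["encode"] @ enhanced

rng = np.random.default_rng(1)
model = {"decode": rng.normal(size=(8, 64)), "encode": rng.normal(size=(64, 8))}
stimulation = closed_loop_step(rng.normal(size=64), model)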

Sangamo Therapeutics attempts to edit a gene inside the body for the first time

From AP Exclusive: US scientists try 1st gene editing in the body

Scientists for the first time have tried editing a gene inside the body in a bold attempt to permanently change a person’s DNA to cure a disease.

The experiment was done Monday in California on 44-year-old Brian Madeux. Through an IV, he received billions of copies of a corrective gene and a genetic tool to cut his DNA in a precise spot.

and

Weekly IV doses of the missing enzyme can ease some symptoms, but cost $100,000 to $400,000 a year and don’t prevent brain damage.

Gene editing won’t fix damage he’s already suffered, but he hopes it will stop the need for weekly enzyme treatments.

and

The therapy has three parts: The new gene and two zinc finger proteins. DNA instructions for each part are placed in a virus that’s been altered to not cause infection but to ferry them into cells. Billions of copies of these are given through a vein.

They travel to the liver, where cells use the instructions to make the zinc fingers and prepare the corrective gene. The fingers cut the DNA, allowing the new gene to slip in. The new gene then directs the cell to make the enzyme the patient lacked.

Only 1 percent of liver cells would have to be corrected to successfully treat the disease, said Madeux’s physician and study leader, Dr. Paul Harmatz at the Oakland hospital.

Zinc finger nucleases are a different gene-editing tool from CRISPR.

I originally wanted to wait the three months necessary to verify whether this procedure worked, but this is history in the making, with enormous implications, and I want H+ to have it on the record.

I’ll update this article with the results of the therapy once they are disclosed.

It might be possible to treat diseases by giving aging tissues a signal to clean house

From Young Again: How One Cell Turns Back Time – The New York Times

None of us was made from scratch. Every human being develops from the fusion of two cells, an egg and a sperm, that are the descendants of other cells. The lineage of cells that joins one generation to the next — called the germline — is, in a sense, immortal.

Biologists have puzzled over the resilience of the germline for 130 years, but the phenomenon is still deeply mysterious.

Over time, a cell’s proteins become deformed and clump together. When cells divide, they pass that damage to their descendants. Over millions of years, the germline ought to become too devastated to produce healthy new life.

and

On Thursday in the journal Nature, Dr. Bohnert and Cynthia Kenyon, vice president for aging research at Calico, reported the discovery of one way in which the germline stays young.

Right before an egg is fertilized, it is swept clean of deformed proteins in a dramatic burst of housecleaning.

and

Combining these findings, the researchers worked out the chain of events by which the eggs rejuvenate themselves.

It begins with a chemical signal released by the sperm, which triggers drastic changes in the egg. The protein clumps within the egg “start to dance around,” said Dr. Bohnert.

The clumps come into contact with little bubbles called lysosomes, which extend fingerlike projections that pull the clumps inside. The sperm signal causes the lysosomes to become acidic. That change switches on the enzymes inside the lysosomes, allowing them to swiftly shred the clumps.

We are entering a cycle where humans and algorithms are adapting to each other

From Exploring Cooperation with Social Machines:

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

You should also read Implementing Algorithms In The Form Of Scripts Has Been An Early Step In Training Humans To Be More Like Machines

Implementing algorithms in the form of scripts has been an early step in training humans to be more like machines

From Cooperating with Algorithms in the Workplace:

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and, most particularly, cost savings. As a result, phone calls to services such as hotels, shops and restaurants may now have a script read out loud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely greatly restrict human agency by failing to define clear boundaries for the domain of the algorithm and recognizing the need for adaptation outside these boundaries.

Thus, often if a worker is asked a specialized or specific query, they lack the ability to respond to it and will either turn away the customer, or accelerate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Not only is the paper mighty interesting, but the whole body of research it belongs to is worth serious investigation.

Also, this TED Talk by David Lee touches on the topic in quite an interesting way: Why jobs of the future won’t feel like work

Nutritional Ketosis Alters Fuel Preference and Thereby Endurance Performance in Athletes

From http://www.cell.com/cell-metabolism/pdfExtended/S1550-4131(16)30355-2:

Ketosis, the metabolic response to energy crisis, is a mechanism to sustain life by altering oxidative fuel selection. Often overlooked for its metabolic potential, ketosis is poorly understood outside of starvation or diabetic crisis. Thus, we studied the biochemical advantages of ketosis in humans using a ketone ester-based form of nutrition without the unwanted milieu of endogenous ketone body production by caloric or carbohydrate restriction.

In five separate studies of 39 high-performance athletes, we show how this unique metabolic state improves physical endurance by altering fuel competition for oxidative respiration. Ketosis decreased muscle glycolysis and plasma lactate concentrations, while providing an alternative substrate for oxidative phosphorylation. Ketosis increased intramuscular triacylglycerol oxidation during exercise, even in the presence of normal muscle glycogen, co-ingested carbohydrate and elevated insulin. These findings may hold clues to greater human potential and a better understanding of fuel metabolism in health and disease.

A drink made of pure ketones could boost the body more than carbs, fat, and protein

From Scientists think they’ve discovered a fourth type of fuel for humans — beyond carbs, fat, and protein | The Independent

To make the product, HVMN leveraged more than a decade and $60 million worth of scientific research through an exclusive partnership with Oxford University.

Most of the food we eat contains carbs. The carbs in fruit come from naturally occurring sugars; those in potatoes, veggies, and pasta come from starch. They’re all ultimately broken down into sugar, or glucose, for energy.

When robbed of carbs, the body turns to fat for fuel.

In the process of digging into its fat stores, the body releases molecules called ketones. A high-fat, low-carb diet (also known as a ketogenic diet) is a shortcut to the same goal.

Instead of going without food, someone on the keto diet tricks the body into believing it is starving by snatching away carbohydrates, its primary source of fuel.

This is why as long as you’re not eating carbs, you can ramp up your intake of fatty foods like butter, steak, and cheese and still lose weight. The body becomes a fat-melting machine, churning out ketones to keep running.

If you could ingest those ketones directly, rather than starving yourself or turning to a keto diet, you could essentially get a superpower.

That performance boost is “unlike anything we’ve ever seen before,” said Kieran Clarke, a professor of physiological biochemistry at Oxford who’s leading the charge to translate her work on ketones and human performance into HVMN’s Ketone.

Neurable has been working on developing brain-control systems for VR for over a year

From Brain-Controlled Typing May Be the Killer Advance That AR Needs – MIT Technology Review

The current speed record for typing via brain-computer interface is eight words per minute, but that uses an invasive implant to read signals from a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” explains Alcaide. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”

He says that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type.

This would have applications well beyond VR.
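One simple way AI can speed up letter-by-letter brain typing is to rank which characters the user is most likely to want next and make those the easiest targets to select. The sketch below is a generic character-bigram predictor in Python, not Neurable's system; the corpus and class names are assumptions.

from collections import Counter, defaultdict

class NextLetterModel:
    # Tiny character-bigram model: given what has been typed so far, rank the
    # letters most likely to follow, so a BCI keyboard can preselect them.
    def __init__(self, corpus):
        self.counts = defaultdict(Counter)
        text = corpus.lower()
        for prev, nxt in zip(text, text[1:]):
            self.counts[prev][nxt] += 1

    def top_candidates(self, typed, k=5):
        prev = typed[-1].lower() if typed else " "
        return [ch for ch, _ in self.counts[prev].most_common(k)]

model = NextLetterModel("the quick brown fox jumps over the lazy dog and the dog barks back")
print(model.top_candidates("th"))  # letters most likely to follow 'h'

A production system would use a proper language model and fold its predictions into the brain-signal decoder itself, but the principle is the same: fewer plausible choices per step means fewer brain-signal selections per word.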

We don’t yet understand how the brain codes for force

From For Brain-Computer Interfaces to Be Useful, They’ll Need to Be Wireless – MIT Technology Review

Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.

The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.

In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person’s head.
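The decoding step in that pipeline, turning recorded firing rates into movement commands, is often approximated in the literature by a simple linear mapping. Here is a minimal, hypothetical Python sketch of that idea; real systems typically use calibrated Kalman-filter decoders, and the weights and channel count below are made up.

import numpy as np

def decode_velocity(firing_rates, weights):
    # Map a vector of per-channel firing rates to a 2-D arm/cursor velocity.
    return weights @ firing_rates

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.01, size=(2, 96))   # assumed 96-channel array
rates = rng.poisson(lam=5.0, size=96).astype(float)
vx, vy = decode_velocity(rates, weights)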

The above quote is interesting, especially because the research is ready to be tested but there’s no funding. However, the real value is in the video embedded in the page, where Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh, explains what the research frontier for neural interfaces looks like.

AR glasses competition starts to get real

From Daqri ships augmented reality smart glasses for professionals | VentureBeat

At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.

and

The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.

Olympus just presented a competing product for $1,500.

Olympus EyeTrek is a $1,500 open-source, enterprise-focused smart glasses product

From Olympus made $1,500 open-source smart glasses – The Verge

The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.

It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to fit multiple frames. It weighs 66 g.

The manual mentions multiple built-in apps: image and video players, a camera (1280 x 720 px), a video recorder (20 fps, up to 30 min of recording), and a QR scanner. It connects to other devices via Bluetooth or Wi-Fi.

You can download the Software Development Kit here. It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.

What is consciousness, and could machines have it?

From What is consciousness, and could machines have it? | Science

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.