From Red Dot Award Winning Tools: Univet 5.0 Augmented Reality Safety Glasses – Core77
Italian safety equipment manufacturer Univet just received a 2017 Red Dot Award for its augmented reality safety glasses. The glasses integrate Sony’s “holographic waveguide technology” into eye protection, allowing wearers to view real-time data without looking up from what they are doing.
A monocular projection system displays data on a holographic screen behind the right protective lens. The screen is clear so the wearer can see through it.
The glasses can use WiFi or Bluetooth to access data on computers, tablets, and smartphones. Information travels in both directions, so data can be collected from internal sensors, such as the GPS and microphone, or from optional sensors, such as thermometers and cameras.
Take a look at the pictures and videos.
From Scientists unveil ultra-thin electronics that can dissolve into the body
the team’s inventions include a biodegradable semi-conductive polymer, disintegrable and flexible electronic circuits, and a biodegradable substrate material for mounting these electrical components onto.
Totally flexible and biocompatible, the ultra-thin film substrate allows the components to be mounted onto both rough and smooth surfaces.
Altogether, the components can be used to create biocompatible, ultra-thin, lightweight and low-cost electronics for applications ranging from wearable electronics to large-scale environmental surveys.
Maybe this is one of the many approaches we’ll use for biohacking or as wearable technology in the future.
From Fernand Léger and the Rise of the Man-Machine
Early in life, Léger embraced a transcendent, quasi-Futurist love of technological energy along with the Cubist notion of putting the squeeze on: fracturing objects into sharp geometric shapes. But soon, his brand of Cubism evolved into an automaton-esque figurative style distinguished by his focus on cylindrical forms. These cylindrical android figures express a synchronization between human and machine that is most relevant today given the coming artificial intelligence workplace. When we look at Léger with new eyes, we see that he sought to express the noise, dynamism, and speed of the new technology machinery in which he and we find ourselves immersed.
The military theme in Léger’s oeuvre remains prominent in post-WWI paintings like “La partie de cartes” (“The Card Game,” 1917), where three decorated soldiers tumble out of themselves and ripple across the picture plane, merging into an ecstatic ménage à trois through the interlacing repetitions of their machine forms. This post-flesh, man-machine unanimity could be seen as a precedent, suggestive of our current post-human condition.
The resplendent mélange of seated tin men — all evocative of that famous one from Oz — displays a frantic, cybernetic logic in terms of the painting’s visual tactility, with once lumpen and deadlocked male forms set flowing in jerks and spasms across the surface. The artist has systematically imposed on them a vibrating restlessness through labyrinthine extensions and doublings, making their flesh undergo steps of annihilation into transubstantiation. The composition’s flickering, staccato repetitions create the impression of a rolling bacchanalia where human forms transcend their fleshiness and extend themselves through motorized re-embodiment. Léger seems to suggest that the truth of life is found not through chance, as one might glean from the card game, but through the technological apparatus of bodies tumbling into a field of circuits.
From The shock tactics set to shake up immunology : Nature
The human vagus nerve contains around 100,000 individual nerve fibres, which branch out to reach various organs. But the amount of electricity needed to trigger neural activity can vary from fibre to fibre by as much as 50-fold.
Yaakov Levine, a former graduate student of Tracey’s, has worked out that the nerve fibres involved in reducing inflammation have a low activation threshold. They can be turned on with as little as 250-millionths of an amp — one-eighth the amount often used to suppress seizures. And although people treated for seizures require up to several hours of stimulation per day, animal experiments have suggested that a single, brief shock could control inflammation for a long time. Macrophages hit by acetylcholine are unable to produce TNF-α for up to 24 hours, says Levine, who now works in Manhasset at SetPoint Medical, a company established to commercialize vagus-nerve stimulation as a medical treatment.
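The figures in the excerpt imply a seizure-suppression current of roughly 2 milliamps. A quick check (the variable names are mine):

```python
# "250-millionths of an amp ... one-eighth the amount often used to suppress
# seizures" implies a seizure-suppression current of about 2 mA.
anti_inflammatory_amps = 250e-6            # 250 millionths of an amp
seizure_amps = anti_inflammatory_amps * 8  # eight times the low threshold
seizure_milliamps = seizure_amps * 1000    # roughly 2.0 mA
```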
By 2011, SetPoint was ready to try the technique in humans, thanks to animal studies and Levine’s optimization efforts. That first trial was overseen by Paul-Peter Tak, a rheumatologist at the University of Amsterdam and at the UK-based pharmaceutical company GlaxoSmithKline. Over the course of several years, 18 people with rheumatoid arthritis have been implanted with stimulators, including Katrin.
For images of the actual device, check Core77, which also covers implantable bioelectronic devices.
From This Glove Translates Sign Language Into Speech
Ayoub, who is currently a Ph.D. researcher at Goldsmiths College in London, designed the glove for anyone who relies on sign language to communicate, from deaf people to children who have non-verbal autism and communicate through gestures. To use it, you simply put the glove on and start signing. The glove translates the signs in real time into sentences that appear on a small screen on the wrist, which can then be read out loud using a small speaker.
Watch the video.
From Someday You Could Swallow This Wireless Robot Like A Pill
These little robots are powered by an electromagnetic field, similar to how you would wirelessly charge a cell phone. By changing the frequency of the magnetic field, the researchers are able to precisely control the exact movement of their prototype.
For instance, one triangular robot that’s no bigger than a quarter is composed of three triangular pieces of thin plastic, attached with hinges to a middle triangle that has a circuit. The hinges are controlled by coils of a metal called “shape-memory alloy,” which changes its form when it’s exposed to heat. When an electric current starts running through the central circuit, these coils heat up and contract, causing the three triangles to fold up toward the center of the robot. When the current stops, the hinges return to their flat state.
From A Wearable Chair Designed to Improve Working Conditions that Involve Manual Labor – Core77
The Chairless Chair® is a flexible wearable ergonomic sitting support designed by Sapetti and produced by the Swiss-based company noonee.
The main application of the Chairless Chair® is in manufacturing companies, where workers are required to stand for long periods and traditional sitting methods are not suitable because they create obstacles in the work area. While wearing the Chairless Chair, users can walk with the sitting support always at hand, without obstructing the work space. The position also avoids strenuous postures such as bending, squatting or crouching.
I wonder whether the device impedes running in case of an emergency.
From Multi-functional Flexible Aqueous Sodium-Ion Batteries with High Safety
The development of wearable and implantable electrical devices has been in great demand recently. However, most existing energy storage systems are based on strong corrosive or toxic electrolytes, posing a huge safety hazard as a result of solution leakage.
Here, we have developed a family of safe and flexible belt- and fiber-shaped aqueous sodium-ion batteries (SIBs) by using various Na+-containing aqueous electrolytes, including Na2SO4 solution, normal saline, and cell-culture medium. The resulting SIBs exhibit high flexibility and excellent electrochemical performance and can be safely applied in wearable electronics. Flexible SIBs with normal saline or cell-culture medium as the electrolyte showed excellent performance, indicating potential application in implantable electronic devices.
In addition, the fiber-shaped electrode in normal saline or cell-culture medium electrolyte can consume O2 and change the pH, implying promising application in biological and medical investigations.
From Go ahead and cry — your tears might power the batteries of the future – The Verge
In a lot of the flexible batteries out there, these electrolyte solutions are made out of strong acids or toxic chemicals, the study says. That stuff is corrosive, flammable, or toxic, and you definitely don’t want it dribbling onto or into your body. That’s why scientists at Fudan University in China came up with a way to replace these toxic electrolyte solutions with something much less harmful.
The researchers experimented with a few different types of electrolyte solutions. The one that worked best was sodium sulfate, which is sometimes used as a laxative. But saline solutions, which are literally diluted salt water, also worked well. Eventually, bodily fluids like blood, sweat, or tears might take over the role of the electrolyte solution to power medical implants, the study says.
From What Comes After Wearables? Try “Invisibles”
a group of researchers at MIT have developed a remote sleep sensing system that uses radio waves to capture data about your brain waves while you sleep–and AI to read them–without ever touching your body. It consists of a laptop-sized wireless device that emits radio signals. When placed in the user’s bedroom, the device detects even the slightest movement of the body. The system doesn’t just do the job of a sleep-tracking wearable without the wearable; it also provides data at a level of accuracy similar to that of a sleep lab.
In order to cut out all the extraneous information her system records, she developed a machine learning algorithm that can extract sleep stages–light, deep, and REM sleep–out of the mess of data. The algorithm was trained on a sleep dataset of 25 individuals for a total of 100 nights of sleep, taken using an FDA-approved device that uses EEG to record brain waves.
The results are 80% accurate.
From Viome raises $15 million to analyze your gut and give you health tips | VentureBeat
Viome, a startup that does RNA analysis of all living organisms in the gut, today announced funding of $15 million to create unique molecular profiles for its customers by identifying and quantifying all the microorganisms that live in the gut. These include bacteria, viruses, yeast, mold, and fungus.
Viome uses artificial intelligence to analyze these results and figure out what’s going on in your gut — certain imbalances can cause chronic illnesses, according to Viome’s cofounder and CEO, Naveen Jain.
“There are other companies out there that can analyze your microbiome, but they use 16S testing, which only looks at a portion of bacteria and only at a genus level (any two people have 95 percent similarity in their microbiome at a genus level),” Jain wrote in an email to VentureBeat. “We look at all living organisms at a strain level and also understand what they are doing.”
From This exoskeleton can be controlled using Amazon’s Alexa – The Verge
Bionik Laboratories says it’s the first to add the digital assistant to a powered exoskeleton. The company has integrated Alexa with its lower-body Arke exoskeleton, allowing users to give voice commands like “Alexa, I’m ready to stand” or “Alexa, take a step.”
Movement of the Arke, which is currently in clinical development, is usually controlled by an app on a tablet or by reacting automatically to users’ movements. Sensors in the exoskeleton detect when the wearer shifts their weight, activating the motors in the backpack that help the individual move. For Bionik, adding Alexa can help individuals going through rehabilitation get familiar with these actions.
A voice-controlled exoskeleton is an interesting way to overcome the complexity of creating sophisticated brain-machine interfaces, but current technology has many limitations. For example, Alexa doesn’t yet have voice fingerprinting, so anybody in the room could, maliciously or not, utter a command on behalf of the user and harm that person with an undesired exoskeleton movement at the wrong time.
Nonetheless, these are valuable baby steps. If you are interested in Bionik Laboratories, you can see a lot more in their on-stage presentation at the IBM Insight conference in 2015.
Did you know that the wheelchair was invented 1500 years ago?
From PathNet: Evolution Channels Gradient Descent in Super Neural Networks
For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting.
PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.
Agents are pathways (views) through the network that determine the subset of parameters used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function.
We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A.
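For readers curious what the tournament-selection loop might look like, here is a toy sketch in Python. It is illustrative only: the layer and module counts are arbitrary, and the fitness function is a stand-in (the real algorithm measures task performance after training the selected modules by backpropagation).

```python
# Toy sketch of PathNet-style pathway evolution. Illustrative only: the
# fitness function below simply rewards reuse of low-index modules so the
# genetic algorithm has something to optimize.
import random

random.seed(0)
LAYERS, MODULES_PER_LAYER, ACTIVE = 3, 10, 4

def random_pathway():
    # A pathway selects a small subset of modules in each layer.
    return [sorted(random.sample(range(MODULES_PER_LAYER), ACTIVE))
            for _ in range(LAYERS)]

def fitness(pathway):
    # Stand-in for "task performance after a few training steps".
    return -sum(sum(layer) for layer in pathway)

def mutate(pathway, rate=0.1):
    # Each chosen module index is independently perturbed with probability `rate`.
    return [sorted({m if random.random() > rate
                    else random.randrange(MODULES_PER_LAYER) for m in layer})
            for layer in pathway]

def tournament_step(population):
    # Binary tournament selection: the loser's genotype is overwritten by a
    # mutated copy of the winner's, as in the paper.
    a, b = random.sample(range(len(population)), 2)
    if fitness(population[a]) < fitness(population[b]):
        a, b = b, a
    population[b] = mutate(population[a])

population = [random_pathway() for _ in range(64)]
initial_best = max(fitness(p) for p in population)
for _ in range(500):
    tournament_step(population)
best = max(population, key=fitness)
```

Because the current best individual can never lose a tournament, the best fitness in the population is monotone non-decreasing over generations.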
From Prodigy: A new tool for radically efficient machine teaching | Explosion AI
Prodigy addresses the big remaining problem: annotation and training. The typical approach to annotation forces projects into an uncomfortable waterfall process. The experiments can’t begin until the first batch of annotations is complete, but the annotation team can’t start until they receive the annotation manuals. To produce the annotation manuals, you need to know what statistical models will be required for the features you’re trying to build. Machine learning is an inherently uncertain technology, but the waterfall annotation process relies on accurate upfront planning. The net result is a lot of wasted effort.
Prodigy solves this problem by letting data scientists conduct their own annotations, for rapid prototyping. Ideas can be tested faster than the first planning meeting could even be scheduled. We also expect Prodigy to reduce costs for larger projects, but it’s the increased agility we’re most excited about. Data science projects are said to have uneven returns, like start-ups: a minority of projects are very successful, recouping costs for a larger number of failures. If so, the most important problem is to find more winners. Prodigy helps you do that, because you get to try things much faster.
From We Will End Disability by Becoming Cyborgs – IEEE Spectrum
It’s quite possible that Alzheimer’s patients of the future will be equipped with memory prosthetics derived from the devices being invented in Berger’s lab today. His work began with delicate electrodes inserted into a rat’s hippocampus, the brain structure responsible for encoding memory. Berger first deciphered the relationship between the input signals from neurons that process a brief learning experience—for example, which lever a rat should press to gain a sip of sugar water—and the output signals from neurons that send the information on to be stored as a memory.
Once he had mapped the correlations between the two electrical patterns, Berger could record an input signal and predict the output signal—in other words, the memory. He didn’t need to know which part of the input pattern coded for the dimensions of the lever or for the taste of the sweet reward. He simply mathematically generated the output signal and sent it to the memory-storage neurons. “It’s like translating Russian to Chinese when you don’t know either language,” Berger says. “We don’t want to know either language; we just want to know how this pattern becomes that pattern.”
Berger proved that he could implant the memory of the lever-and-reward test in a rat with a damaged hippocampus that was unable to form memories on its own. Even more remarkable, he implanted the memory in a rat that had never before undergone the test or seen the levers. The rat entered the test chamber for the first time, pressed the correct lever, and sucked down the sweet nectar.
From Bioresorbable silicon electronic sensors for the brain – Nature
Many procedures in modern clinical medicine rely on the use of electronic implants in treating conditions that range from acute coronary events to traumatic injury. However, standard permanent electronic hardware acts as a nidus for infection: bacteria form biofilms along percutaneous wires, or seed haematogenously, with the potential to migrate within the body and to provoke immune-mediated pathological tissue reactions. The associated surgical retrieval procedures, meanwhile, subject patients to the distress associated with re-operation and expose them to additional complications.
Here, we report materials, device architectures, integration strategies, and in vivo demonstrations in rats of implantable, multifunctional silicon sensors for the brain, for which all of the constituent materials naturally resorb via hydrolysis and/or metabolic action, eliminating the need for extraction. Continuous monitoring of intracranial pressure and temperature illustrates functionality essential to the treatment of traumatic brain injury; the measurement performance of our resorbable devices compares favourably with that of non-resorbable clinical standards.
In our experiments, insulated percutaneous wires connect to an externally mounted, miniaturized wireless potentiostat for data transmission. In a separate set-up, we connect a sensor to an implanted (but only partially resorbable) data-communication system, proving the principle that there is no need for any percutaneous wiring. The devices can be adapted to sense fluid flow, motion, pH or thermal characteristics, in formats that are compatible with the body’s abdomen and extremities, as well as the deep brain, suggesting that the sensors might meet many needs in clinical medicine.
From Is Elysium Health’s Basis the Fountain of Youth? — Science of Us
Others who’d taken Basis before me had described effects including fingernail growth, hair growth, skin smoothness, crazy dreams, increased stamina, better sleep, and more energy. Once I began taking it, I did feel an almost jittery uptick in mojo for a few days, and I slept more soundly as well. Then those effects seemed to recede, and there were also mornings where I felt a little out of it. If these were placebo effects, they were weird ones, because they didn’t make me feel better, only different.
Because the two active compounds in Basis, pterostilbene and NR, are natural (occurring in blueberries and milk, respectively) and have long been available separately as supplements, Elysium has been able to skip the FDA gauntlet and sell its capsules immediately.
The agility that comes with bypassing federal regulation has an obvious cost: Guarente and his advisory board are the only scientific credibility Elysium can claim. The company stresses that it is using only compounds supported by hundreds of peer-reviewed papers, that it enforces high manufacturing standards, and that it is conducting a human trial (currently 120 people between the ages of 60 and 80 are participating).
but most importantly
A large number of men who have made fortunes in Silicon Valley believe so — or at least are trying to recast aging as merely another legacy system in need of recoding. Oracle co-founder Larry Ellison’s Ellison Medical Foundation has spent more than $400 million on aging research. In 2013, Alphabet’s Larry Page announced a moonshot life-extension project called Calico, and XPrize founder Peter Diamandis partnered with genome sequencer J. Craig Venter to found a competing company called Human Longevity Inc. Paul F. Glenn, an 85-year-old venture capitalist who watched his grandfather die of cancer, launched an aging-science foundation more than 50 years ago that has since funded a dozen aging-research centers around the country. Peter Thiel is 37 years Glenn’s junior but equally desperate to find a death cure: He has given at least $3 million to the Methuselah Foundation, the research vehicle for the extravagantly bearded, Barnumesque immortality promoter Aubrey de Grey. Thiel has also said he takes a daily dose of human growth hormone, and he was reported to have seriously explored the transfusion of blood from the young to the old.
From Joseph Redmon: How computers learn to recognize objects instantly | TED.com
Joseph Redmon works on the YOLO (You Only Look Once) system, an open-source method of object detection that can identify objects in images and video — from zebras to stop signs — with lightning-quick speed. In a remarkable live demo, Redmon shows off this important step forward for applications like self-driving cars, robotics and even cancer detection.
A few years ago, on my personal Twitter account, I suggested that a side benefit for Google of owning YouTube would be having the largest archive of human activities on video with which to train its AI. What Redmon did here is what I had in mind at that time.
By the way, the demonstration during the TED talk is impressive.
From Ray Kurzweil: Get ready for hybrid thinking | TED.com
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.
Speaking of AI augmenting human intelligence rather than replacing it, Ray Kurzweil popularized the idea in 2014, suggesting that nanorobotics could do the trick in just a few decades.
Remember that he works for Google.
From Tom Gruber: How AI can enhance our memory, work and social lives | TED.com
Tom Gruber, co-creator of Siri, wants to make “humanistic AI” that augments and collaborates with us instead of competing with (or replacing) us. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function — from turbocharging our design skills to helping us remember everything we’ve ever read and the name of everyone we’ve ever met.
The video is short but gives a very clear idea of how Apple is thinking about AI and what the future applications could be.
From Drug-Carrying “Nanoswimmers” Could Slither Past the Brain’s Cellular Defenses – Scientific American
An international team of researchers has developed minuscule, self-propelled devices that mimic the way cells move. These “nanoswimmers” cross the blood–brain barrier highly efficiently, and could lead to the development of drug delivery systems that navigate through tissues and organs to target specific sites precisely.
…penetrating the blood–brain barrier, which prevents microbes, toxins and large molecules from entering the brain, has proved hugely challenging. One major goal is to develop self-guided polymersomes that traverse this barrier to deliver their cargo to a specific brain area.
From What an artificial intelligence researcher fears about AI
…as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that “to err is human,” so it is likely impossible for us to create a truly safe system.
We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don’t yet know what it’s capable of.
Wonderful blog post. Artificial intelligence experts face scientific, legal, moral, and ethical dilemmas like no other experts before them in history.
From Correction of a pathogenic gene mutation in human embryos – Nature
Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response.
Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations.
From Omega Ophthalmics is an eye implant platform with the power of continuous AR | TechCrunch
… lens implants aren’t a new thing. Implanted lenses are commonly used as a solve for cataracts and other degenerative diseases mostly affecting senior citizens; about 3.6 million patients in the U.S. get some sort of procedure for the disease every year.
Cataract surgery involves removal of the cloudy lens and replacing it with a thin artificial type of lens. Co-founder and board-certified ophthalmologist Gary Wortz saw an opportunity here to offer not just a lens but a platform to which other manufacturers could add different interactive sensors, drug delivery devices and the inclusion of AR/VR integration.
Maybe there’s a surprisingly large audience among the over-60s willing to try to regain a second youth through biohacking. Maybe the over-60s will become the first true augmented humans.
From Nanorobots: Where We Are Today and Why Their Future Has Amazing Potential
Nanotechnology is the science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.
Essentially, it’s manipulating and controlling materials at the atomic and molecular level.
To give you perspective, here’s how to visualize a nanometer:
- The ratio of the Earth to a child’s marble is roughly the ratio of a meter to a nanometer.
- It is a million times smaller than the length of an ant.
- A sheet of paper is about 100,000 nanometers thick.
- A red blood cell is about 7,000-8,000 nanometers in diameter.
- A strand of DNA is 2.5 nanometers in diameter.
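These comparisons amount to simple unit conversions. A quick sketch (the variable names are mine; the values come from the list above):

```python
# Sanity-check the scale comparisons above; all base values in nanometers.
NM_PER_M = 1_000_000_000   # 1 meter = 10^9 nanometers
PAPER_NM = 100_000         # sheet of paper: ~100,000 nm thick
RBC_NM = 7_500             # red blood cell: midpoint of 7,000-8,000 nm
DNA_NM = 2.5               # DNA strand: 2.5 nm in diameter

paper_mm = PAPER_NM / 1_000_000   # a sheet of paper is ~0.1 mm thick
dna_per_rbc = RBC_NM / DNA_NM     # ~3,000 DNA strands span one red blood cell
marble_ratio = NM_PER_M           # meter:nanometer is the Earth:marble ratio
```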
A nanorobot, then, is a machine that can build and manipulate things precisely at an atomic level.
From Legions of nanorobots target cancerous tumors with precision: Administering anti-cancer drugs redefined — ScienceDaily
“These legions of nanorobotic agents were actually composed of more than 100 million flagellated bacteria — and therefore self-propelled — and loaded with drugs that moved by taking the most direct path between the drug’s injection point and the area of the body to cure,” explains Professor Sylvain Martel, holder of the Canada Research Chair in Medical Nanorobotics and Director of the Polytechnique Montréal Nanorobotics Laboratory, who heads the research team’s work. “The drug’s propelling force was enough to travel efficiently and enter deep inside the tumours.”
When they enter a tumour, the nanorobotic agents can detect in a wholly autonomous fashion the oxygen-depleted tumour areas, known as hypoxic zones, and deliver the drug to them. This hypoxic zone is created by the substantial consumption of oxygen by rapidly proliferative tumour cells.
From Super-intelligence and eternal life: transhumanism’s faithful follow it blindly into a future for the elite
One problem is that a highly competitive social environment doesn’t lend itself to diverse ways of being. Instead it demands increasingly efficient behaviour. Take students, for example. If some have access to pills that allow them to achieve better results, can other students afford not to follow? This is already a quandary. Increasing numbers of students reportedly pop performance-enhancing pills. And if pills become more powerful, or if the enhancements involve genetic engineering or intrusive nanotechnology that offer even stronger competitive advantages, what then? Rejecting an advanced technological orthodoxy could potentially render someone socially and economically moribund (perhaps evolutionarily so), while everyone with access is effectively forced to participate to keep up.
From AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?
At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.
“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.
Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.
The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.
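The missing incentive Batra describes can be illustrated with a toy reward-shaping term. This is my own sketch, not FAIR's actual setup; the vocabulary, penalty weight, and function name are all invented for illustration:

```python
# Toy illustration of a language-grounding reward term: without it, only task
# reward matters and agents are free to drift away from legible English.
ENGLISH_VOCAB = {"i", "want", "two", "balls", "you", "get", "one", "book"}

def shaped_reward(task_reward, utterance, penalty=0.5):
    # Subtract a penalty for each token outside the reference vocabulary,
    # anchoring the agents' messages to English.
    off_vocab = sum(1 for tok in utterance.split() if tok not in ENGLISH_VOCAB)
    return task_reward - penalty * off_vocab

legible = shaped_reward(10.0, "i want two balls")
drifted = shaped_reward(10.0, "balls balls balls to me to me to me")
```

With the same task reward, the drifted utterance scores lower, so agents trained against this shaped objective have a reason to stay in-vocabulary.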
What if artificial intelligence could help humans develop a more efficient, universal language?
From Timekettle’s WT2 real-time translation earpieces enable ordinary conversation across language barriers | TechCrunch
When you speak in English, there’s a short delay and then your interlocutor hears it in Mandarin Chinese (or whatever other languages are added later). They respond in Chinese, and you hear it in English — it’s really that simple.
The main issue I had was with the latency, which left Wells and I staring at each other silently for a three count while the app did its work. But the version I used wasn’t optimized for latency, and the team is hard at work reducing it.
“We’re trying to shorten the latency to 1-3 seconds, which needs lots of work in optimization of the whole process of data transmission between the earphones, app and server,”
From First Human Embryos Edited in U.S. – MIT Technology Review
The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned. The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR.
To date, three previous reports of editing human embryos were all published by scientists in China.
Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.
One week later, additional details emerge.
From US scientists have corrected a genetic heart mutation in embryos using CRISPR | TechCrunch
Shoukhrat Mitalipov and his colleagues from Oregon Health and Science University have successfully used the CRISPR Cas9 gene editing technology to wipe out a genetically inherited heart mutation in embryos.
Mitalipov and his colleagues were able to avoid the previous mistakes made by the Chinese scientists by injecting the Cas9 enzyme (which acts as a sort of scissors for DNA fragments) into the sperm and eggs at the same time.
What an incredible moment in history to witness.
From Amazon 1492: secret health tech project
The stealth team, which is headquartered in Seattle, is focused on both hardware and software projects, according to two people familiar with the matter. Amazon has become increasingly interested in exploring new business in healthcare. For example, Amazon has another unit exploring selling pharmaceuticals, CNBC reported in May.
The new team is currently looking at opportunities that involve pushing and pulling data from legacy electronic medical record systems. If successful, Amazon could make that information available to consumers and their doctors.
1492: Conquest of Paradise.
I wouldn’t be surprised if the long-term goal of this unit were, just as with Google’s Verily, genetic engineering and anti-aging medical research.
From Colour-shifting electronic skin could have wearable tech and prosthetic uses – IOP Publishing
researchers in China have developed a new type of user-interactive electronic skin, with a colour change perceptible to the human eye, and achieved with a much-reduced level of strain. Their results could have applications in robotics, prosthetics and wearable technology.
…the study from Tsinghua University in Beijing, employed flexible electronics made from graphene, in the form of a highly-sensitive resistive strain sensor, combined with a stretchable organic electrochromic device.
Long piece published by The Verge on the biohacking scene and how these early devices stop working within a handful of years.
From IBM News room – 2017-07-21 IBM and University of Alberta Publish New Data on Machine Learning Algorithms to Help Predict Schizophrenia
In the paper, researchers analyzed de-identified brain functional Magnetic Resonance Imaging (fMRI) data from the open data set, Function Biomedical Informatics Research Network (fBIRN) for patients with schizophrenia and schizoaffective disorders, as well as a healthy control group. fMRI measures brain activity through blood flow changes in particular areas of the brain.
Specifically, the fBIRN data set reflects research done on brain networks at different levels of resolution, from data gathered while study participants conducted a common auditory test. Examining scans from 95 participants, researchers used machine learning techniques to develop a model of schizophrenia that identifies the connections in the brain most associated with the illness.
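The modeling step described above — learning which brain connections best separate patients from controls — can be sketched in miniature. The fBIRN data itself isn't reproduced here, so the snippet below uses synthetic "connectivity" features and a simple nearest-centroid classifier with leave-one-out validation; every number in it is an illustrative assumption, not the IBM/University of Alberta pipeline.

```python
# Illustrative sketch (not the published fBIRN pipeline): classify subjects
# from synthetic "functional connectivity" features with nearest centroids.
import numpy as np

rng = np.random.default_rng(0)

n_per_group, n_edges = 48, 200        # ~96 toy participants, 200 edge features
# Hypothetical group means: patients differ slightly on a subset of edges,
# standing in for "the connections most associated with the illness".
mu_control = np.zeros(n_edges)
mu_patient = np.zeros(n_edges)
mu_patient[:20] = 0.8

X = np.vstack([
    rng.normal(mu_control, 1.0, (n_per_group, n_edges)),
    rng.normal(mu_patient, 1.0, (n_per_group, n_edges)),
])
y = np.array([0] * n_per_group + [1] * n_per_group)

def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_loo(X, y)
print(f"leave-one-out accuracy: {acc:.2f}")
```

The per-edge group differences that drive the classification are the toy analogue of the illness-associated connections the researchers report.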
From A Wisconsin company will let employees use microchip implants to buy snacks and open doors – The Verge
A Wisconsin company called Three Square Market is going to offer employees implantable chips to open doors, buy snacks, log in to computers, and use office equipment like copy machines. Participating employees will have the chips, which use near field communication (NFC) technology, implanted between their thumb and forefinger.
They’re essentially an extension of the chips you’d find in contactless smart cards or microchipped pets: passive devices that store very small amounts of information.
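Those "very small amounts of information" are typically stored as NDEF records, the NFC Forum's exchange format. A minimal sketch of a short-form NDEF text record follows; the layout matches the NDEF short-record header, but it is a simplified illustration, and the payload string is made up, not anything Three Square Market actually stores.

```python
# Simplified sketch of an NDEF "Text" record -- the kind of tiny payload a
# passive NFC tag (card, pet chip, or implant) typically carries.
# Follows the NFC Forum short-record layout; illustrative, not exhaustive.

def ndef_text_record(text: str, lang: str = "en") -> bytes:
    # Payload: status byte (language-code length), language code, UTF-8 text.
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    header = bytes([
        0xD1,            # MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known)
        0x01,            # type length: 1 byte ("T")
        len(payload),    # payload length (short record: a single byte)
    ])
    return header + b"T" + payload

# Hypothetical content -- e.g. an employee badge identifier.
record = ndef_text_record("employee:4096")
print(record.hex(), f"({len(record)} bytes)")
```

Twenty bytes for a badge ID gives a sense of the scale: these implants hold kilobytes at most, nothing like a tracking device.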
From Orii smart ring turns your fingertip into a Bluetooth earpiece
you wear the ring on your index finger, and when it vibrates with an incoming call, simply lift your hand up, touch your fingertip on a sweet spot just before your ear, then chat away. An earlier crowdfunding project, the Sgnl smart strap (formerly TipTalk) by Korea’s Innomdle Lab, had the same idea, but it has yet to ship to backers long after its February target date this year.
The Orii is essentially an aluminum ring melded to a small package containing all the electronics. The main body on the latest working prototype came in at roughly 30 mm long, 20 mm wide and 12 mm thick.
From Khosla Ventures leads $50 million investment in Vicarious’ AI tech | VentureBeat | Entrepreneur | by Bérénice Magistretti
The Union City, California-based startup is using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks. Vicarious focuses on the neocortex, a part of the brain concerned with sight and hearing.
“We aren’t trying to emulate the brain exactly,” wrote Vicarious cofounder and CEO Scott Phoenix, in an email to VentureBeat. “A good way to think about it is airplanes and birds. When building a plane, you want to borrow relevant features from birds, like low body weight and deformable wings, without getting into irrelevant details like feather colors and flapping.”
I think this quote is deeply inspired by the book Superintelligence by Nick Bostrom, which is not surprising, as Vicarious is trying to build the holy grail of AI: an artificial general intelligence.
They have the most impressive list of investors I have seen in a long time.
From Inflammation-free, gas-permeable, lightweight, stretchable on-skin electronics with nanomeshes : Nature Nanotechnology : Nature Research
Thin-film electronic devices can be integrated with skin for health monitoring and/or for interfacing with machines. Minimal invasiveness is highly desirable when applying wearable electronics directly onto human skin. However, manufacturing such on-skin electronics on planar substrates results in limited gas permeability. Therefore, it is necessary to systematically investigate their long-term physiological and psychological effects.
As a demonstration of substrate-free electronics, here we show the successful fabrication of inflammation-free, highly gas-permeable, ultrathin, lightweight and stretchable sensors that can be directly laminated onto human skin for long periods of time, realized with a conductive nanomesh structure. A one-week skin patch test revealed that the risk of inflammation caused by on-skin sensors can be significantly suppressed by using the nanomesh sensors.
Furthermore, a wireless system that can detect touch, temperature and pressure is successfully demonstrated using a nanomesh with excellent mechanical durability. In addition, electromyogram recordings were successfully taken with minimal discomfort to the user.
From ReWalk Robotics shows off a soft exosuit designed to bring mobility to stroke patients | TechCrunch
The version on display is still a prototype, but all of the functionality is in place, using a motorized pulley system to bring mobility to legs impacted by stroke.
The device, now known as the Restore soft-suit, relies on a motor built into a waistband that controls a pair of cables that operate similarly to bicycle brakes, lifting a footplate in the shoe and moving the whole leg in the process. The unaffected leg, meanwhile, has sensors that measure the wearer’s gait while walking, syncing up the two legs’ movement.
From Smart Contact Lenses – How Far Away Are They? – Nanalyze
The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly we power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion, or kinetic energy. Every time the eye blinks, we get some power. Now that we have the power problem solved, there are at least several applications we can think of, ordered from easiest to hardest:
- Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
- Level 2 – Gathering information from your body – like glucose monitoring for diabetics
- Level 3 – Augmenting your vision with digital overlay
- Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)
So when we ask the question “how far away are we from having smart contact lenses?” the answer isn’t that simple. The first level we have already achieved.
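The "power from blinking" claim is worth a back-of-envelope check. The sketch below estimates the harvestable energy per blink and the resulting average power; every input is an assumption chosen for illustration, not a measured figure from any lens prototype.

```python
# Back-of-envelope estimate of blink-harvested power.
# All numbers below are illustrative assumptions, not measured values.
eyelid_mass_kg = 1e-3      # assumed effective moving mass: ~1 g
travel_m = 0.01            # assumed eyelid travel: ~1 cm
blink_time_s = 0.1         # assumed blink duration
blinks_per_min = 15        # rough spontaneous blink rate
efficiency = 0.1           # assumed harvester conversion efficiency

v = travel_m / blink_time_s                       # average eyelid speed
energy_per_blink_j = 0.5 * eyelid_mass_kg * v**2  # kinetic energy, 1/2 m v^2
avg_power_w = energy_per_blink_j * efficiency * blinks_per_min / 60

print(f"energy/blink ≈ {energy_per_blink_j * 1e6:.1f} µJ, "
      f"average power ≈ {avg_power_w * 1e9:.0f} nW")
```

Even with generous assumptions this lands in the nanowatt-to-microwatt range, which is why Level 1 applications (passive optics) are here today while Level 3–4 displays remain hard.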
From This super-stretchy wearable feels like a second skin and can record data – The Verge
scientists have created a super-thin wearable that can record data through skin. That would make this wearable, which looks like a stylish gold tattoo, ideal for long-term medical monitoring — it’s already so comfortable that people forgot they were wearing it.
Most skin-based interfaces consist of electronics embedded in a substance, like plastic, that is then stuck onto the skin. Problem is, the plastic is often rigid or it doesn’t let you move and sweat. In a paper published today in the journal Nature Nanotechnology, scientists used a material that dissolves under water, leaving the electronic part directly on the skin and comfortable to bend and wear.
From Google Glass 2.0 Is a Startling Second Act | WIRED
Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.
For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)
Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.
If Glass EE gains traction, which I believe it will if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience it can reuse for the AR contact lenses currently in the works.
From Robust Adversarial Examples
We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it’s a monitor or desktop computer regardless of how it’s zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts.
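The key idea behind such transformation-robust attacks is usually described as "expectation over transformation": optimize the perturbation against many randomly transformed views of the input rather than a single view. The toy below applies that idea to a linear classifier with random gain-and-noise transforms standing in for scaling and rotation; it is a minimal sketch of the principle, not the authors' actual method or models.

```python
# Toy "expectation over transformation" sketch: craft a perturbation that
# flips a linear classifier's decision across many random transforms.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # toy linear classifier: sign(w @ x)
x = rng.normal(size=64)
x *= np.sign(w @ x)                # ensure x starts on the +1 side

def random_transform(v):
    """Stand-in for scale/rotate/crop: random gain plus small noise."""
    return rng.uniform(0.8, 1.2) * v + rng.normal(scale=0.05, size=v.shape)

delta = np.zeros_like(x)
step = 0.05 * w / np.linalg.norm(w)       # descend the classifier's score
for _ in range(400):
    # Evaluate the score averaged over a batch of sampled transforms.
    scores = [w @ random_transform(x + delta) for _ in range(8)]
    if np.mean(scores) < -1.0:            # robust negative margin reached
        break
    delta -= step

fooled = np.mean([w @ random_transform(x + delta) < 0 for _ in range(200)])
print(f"fooled on {fooled:.0%} of random transforms")
```

Optimizing against a single fixed view would leave the attack fragile to re-scaling; averaging over sampled transforms is what makes the printed kitten photo keep fooling the network as the camera moves.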
Watch the videos.
From WaveOptics raises $15.5 million for augmented reality displays | VentureBeat
While a number of major manufacturers are building the full AR systems (including the optics, sensors, camera, and head-mounted unit), WaveOptics is focused on developing the underlying optics to deliver an enhanced AR experience.
The core of the WaveOptics technology is a waveguide that is able to channel light input from a micro display positioned at the periphery of a lens made of glass — or in the future, plastic. Unlike conventional technologies that rely on cumbersome prisms, mirrors, or scarce materials, WaveOptics’ optical design harnesses waveguide hologram physics and photonic crystals, which enable lightweight design with good optical performance, the company said.
From Elon Musk says we need to regulate AI before it becomes a danger to humanity – The Verge
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association Summer Meeting on Saturday. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
The solution, says Musk, is regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He added that what he sees as the current model of regulation, in which governments step in only after “a whole bunch of bad things happen,” is inadequate for AI because the technology represents “a fundamental risk to the existence of civilization.”
He doesn’t mince words anymore. He must have seen something that truly terrified him.
The full video is here: https://www.youtube.com/watch?v=2C-A797y8dA
From Will HoloLens turn air travelers into mixed-reality characters? – GeekWire
Imagine a world where headset-wearing flight attendants can instantly know how you’re feeling based on a computer analysis of your facial expression.
Actually, you don’t need to imagine: That world is already in beta, thanks to Air New Zealand, Dimension Data and Microsoft HoloLens.
In May, the airline announced that it was testing HoloLens’ mixed-reality system as a tool for keeping track of passengers’ preferences in flight – for example, their favorite drink and preferred menu items. And if the imaging system picked up the telltale furrowed brow of an anxious flier, that could be noted in an annotation displayed to the flight attendant through the headset.
Google already failed at this. The only places where AR glasses would be socially accepted are those where personnel wearing equipment is the norm.
It would take years, if not decades, for people to accept the idea that flight attendants must wear special equipment to serve drinks.
From Your body is a big battery and scientists want to power gadgets with it – The Verge
There are many ways self-powered devices can work. One is piezoelectric energy, which is generated when you apply pressure to certain materials. Another method, more common in the public imagination, is harvesting movement. But while movement seems obvious, it’s not practical to have a device that only works when you’re in motion. So, for many researchers, the best source of energy is body heat, or thermoelectric generation.
Thermoelectric generation works because our bodies are almost always a different temperature from the air outside. Thermoelectric generators pick up on the temperature difference and then use that to create energy, says Daryoosh Vashaee, an electrical engineer at North Carolina State University. Last year, his team built a tiny device that did just that. It’s a metallic tab that can be embedded in a shirt or worn on an armband.
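The physics behind that tab is the Seebeck effect: a thermocouple produces a voltage proportional to the temperature difference across it, V = S·ΔT. A rough estimate of what a wearable module could deliver follows; every parameter is an illustrative assumption, not a spec of Vashaee's device.

```python
# Back-of-envelope thermoelectric generator (TEG) estimate via the Seebeck
# effect, V = S * dT.  All inputs are illustrative assumptions, not the
# NC State device's specifications.
n_couples = 100            # assumed number of thermocouple pairs in series
seebeck_v_per_k = 200e-6   # assumed Seebeck coefficient per couple (~Bi2Te3)
delta_t_k = 2.0            # assumed usable skin-to-air temperature difference
r_internal_ohm = 10.0      # assumed module internal resistance

v_open = n_couples * seebeck_v_per_k * delta_t_k    # open-circuit voltage
p_matched = v_open**2 / (4 * r_internal_ohm)        # power into a matched load

print(f"open-circuit: {v_open * 1e3:.0f} mV, power ≈ {p_matched * 1e6:.0f} µW")
```

Tens of microwatts from a couple of degrees of temperature difference explains both the appeal (always on, no motion required) and the constraint: body-heat harvesting suits sensors, not smartphones.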
From Augmentation of Brain Function: Facts, Fiction and Controversy | Frontiers Research Topic
Augmentation of brain function is no longer just a theme of science fiction. Due to advances in neural sciences, it has become a matter of reality that a person may consider at some point in life, for example as a treatment of a neurodegenerative disease. Currently, several approaches offer enhancements for sensory, motor and cognitive brain functions, as well as for mood and emotions. Such enhancements may be achieved pharmacologically, using brain implants for recordings, stimulation and drug delivery, by employing brain-machine interfaces, or even by ablation of certain brain areas.
I plan to review all of them.
From Are You Living in a Simulation?
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
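The trilemma above rests on a simple fraction Bostrom derives: among all observers with human-type experiences, the simulated share is f_sim = f_p·N / (f_p·N + 1), where f_p is the fraction of civilizations that reach a posthuman stage and run ancestor-simulations, and N is the average number of simulated histories each runs. The input values below are arbitrary, chosen only to show how quickly the fraction approaches 1.

```python
# Worked instance of the simulation argument's core fraction:
#   f_sim = (f_p * N) / (f_p * N + 1)
# f_p: fraction of civilizations that run ancestor-simulations
# N:   average number of simulated histories each such civilization runs
# The example inputs are arbitrary illustrative values.

def simulated_fraction(f_p: float, n_sims: float) -> float:
    return (f_p * n_sims) / (f_p * n_sims + 1)

# If even 1% of civilizations run just 1,000 ancestor-simulations each,
# over 90% of all human-type observers would be simulated:
print(f"{simulated_fraction(0.01, 1000):.3f}")
```

This is why the argument needs no strong assumptions about N: unless both f_p and N are driven toward zero (propositions 1 and 2), the fraction is overwhelmingly close to 1 (proposition 3).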
From This DNA-mimicking protein can make gene editing more precise and safe – The Verge
Even though gene-editing tools like CRISPR-Cas9 are very precise, they sometimes snip pieces of DNA they weren’t programmed to cut. These off-target cuts can be dangerous, and scientists have been trying to find ways to prevent them.
The researchers found that the protein AcrIIA4 mimics DNA so that it can bind to the Cas9 enzyme, blocking it from attaching to actual DNA and cutting it.
Finally, the researchers added AcrIIA4 a few hours after adding the Cas9; that prevented CRISPR from cutting DNA at the wrong sites, while still allowing time for cutting at the right sites.