By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.
To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.
Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
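A minimal sketch of how such hybrid decoding could work; the thresholds, windows, and fusion logic here are illustrative assumptions, not the team's actual pipeline:

```python
import numpy as np

def detect_error_potential(eeg_epoch, threshold=2.0):
    """Crude error-potential detector: flag an error if the mean amplitude
    in an assumed 200-400 ms post-event window stands out from the
    pre-event baseline by more than `threshold` standard deviations."""
    baseline, window = eeg_epoch[:200], eeg_epoch[200:400]
    z = (window.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return bool(z > threshold)

def classify_gesture(emg_rms_left, emg_rms_right):
    """Map forearm EMG activity to a coarse scroll gesture."""
    if max(emg_rms_left, emg_rms_right) < 0.1:
        return "rest"
    return "left" if emg_rms_left > emg_rms_right else "right"

# Simulated supervision loop: EEG flags the robot's mistake, then EMG
# selects the correction for the robot to execute.
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, 1000)
eeg[200:400] += 3.0                      # inject a strong "error" deflection
correction = None
if detect_error_potential(eeg):
    correction = classify_gesture(emg_rms_left=0.05, emg_rms_right=0.6)
```

This mirrors the division of labor in the article: the EEG side only has to answer a coarse yes/no question ("was that an error?"), which is exactly the kind of signal EEG can deliver reliably, while the EMG side handles the more specific left/right selection.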
Sleep deprivation costs the US economy $411 billion a year. It’s bad for your health and generally turns you into a cranky piece of garbage no one wants to be around. So, naturally, Bose wants to be in the sleep business. Tomorrow, the company launches SleepBuds, its first foray into helping people fall and stay asleep.
There’s no active noise cancelling on-board, unlike Bose’s better known efforts. Instead, the on-board soundscapes (things like leaves rustling and trickling waterfalls) are designed to essentially drown out noise.
The SleepBuds never blocked the sound altogether. Instead, the result was more of a mix of sounds, with the strange effect of hearing someone snoring off in the distance in a wind-swept field. You can always adjust the sound levels in the app, but you don’t want to make things too loud, for obvious reasons.
Interestingly, the company opted not to offer streaming here, instead storing files locally. There are ten preloaded sounds, with the option of adding more. This was primarily done for battery reasons. You should get about 16 hours on a charge, with 16 additional hours via the charging case.
CTRL-Labs’ work is built on a technology known as differential electromyography, or EMG. The band’s inside is lined with electrodes, and while they’re touching my skin, they measure electrical pulses along the neurons in my arm. These superlong cells are transmitting orders from my brain to my muscles, so they’re signaling my intentions before I’ve moved or even when I don’t move at all.
EMG is widely used to measure muscle performance, and it’s a promising option for prosthetic limb control. CTRL-Labs isn’t the first company to imagine an EMG-based interface, either. Canadian startup Thalmic Labs sells an EMG gesture-reading armband called the Myo, which detects muscle movements and can handle anything from controlling a computer to translating sign language. (CTRL-Labs used Myo armbands in early prototyping, before designing its own hardware.)
One issue is interference from what Bouton refers to as motion artifacts. The bands have to process extraneous data from accidental hand movements, external vibrations, and the electrodes shifting around the skin. “All those things can cause extra signal you don’t want,” he says. An electrode headset, he notes, would face similar problems — but they’re serious issues for either system.
Reardon says CTRL-Labs’ band can pick out far more precise neural activity than the Myo, which Thalmic bills as a muscle-reading system rather than a brain-computer interface. And the band is supposed to work consistently anywhere on the wrist or lower arm, as long as it’s fitted snugly. (The prototype felt like wearing a thick, metallic elastic bracelet.) But Bouton, who uses EMG to find and activate muscles in people with paralysis, says users would get the best results from hitting exactly the same spot every time — which the average person might find difficult. “Even just moving a few millimeters can make a difference,” he says.
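To see why the “differential” in differential electromyography matters, here is a toy example with synthetic signals: subtracting a neighboring electrode cancels interference that both electrodes pick up equally (like mains hum or a shared motion artifact), while the local muscle signal survives. All the numbers and waveforms below are made up for illustration:

```python
import numpy as np

fs = 2000                                  # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)

# A brief local muscle burst seen only by electrode A, plus 50 Hz mains
# hum and a slow motion artifact that both electrodes pick up equally.
muscle = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 150 * t), 0.0)
common = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 2 * t)

electrode_a = muscle + common
electrode_b = common                        # neighbor sees the artifact only

differential = electrode_a - electrode_b    # common-mode interference cancels
```

The caveat Bouton raises still applies: an artifact that affects only one electrode (say, one contact shifting on the skin) is not common-mode and would pass straight through the subtraction.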
Long, fascinating profile of CTRL-Labs. I saw them presenting in NYC at the O’Reilly AI Conference, where they announced that their wristband would become available later this year.
The San Francisco startup is developing an optical imaging system—sufficiently compact to fit inside a skull cap, wand, or bandage—that scatters and captures near-infrared light inside our bodies to create holograms that reveal our occluded selves. The devices could diagnose cancers as well as cardiovascular or other diseases. But because the wavelength of near-infrared light is smaller than a micron, smaller than the smallest neuron, Jepsen believes the resolution of the technology is fine enough to make thoughts visible, too.
The company’s promise depended on combining these elements: proof of the entire body’s translucence; holographic techniques, some dating to the 1960s; and Asian silicon manufacturing, which can turn new chip architectures into commercial products. Openwater may be less than two years old, but Jepsen has been thinking about a holographic scanner for decades. She is uniquely suited to the challenge. Her early research was in holography; she led display development at Intel, Google X, and Facebook Oculus; and she has shipped billions of dollars of chips.
The idea derives from Jack Gallant, a cognitive neuroscientist at UC Berkeley, who decoded movies shown to subjects in a functional MRI machine by scanning the oxygenated blood in their brains. The images Gallant recovered are blurry, because the resolution of fMRI is comparatively coarse. Holography would not only see blood better but capture the electrochemical pulses of the neurons themselves.
MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.
The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.
But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.
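The training step described above, correlating particular signals with particular words, could be sketched as a nearest-centroid classifier over per-utterance feature vectors. Everything below (the features, word list, and noise levels) is an illustrative assumption, not the MIT system:

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["add", "subtract", "multiply"]

# Assumption: each silent utterance yields a 7-dimensional feature vector
# (e.g. RMS energy per electrode) that clusters around a per-word centroid.
true_centroids = {w: rng.uniform(0, 1, size=7) for w in words}

def make_training_set(samples_per_word=20, noise=0.05):
    """Simulate labeled utterances: noisy samples around each word's centroid."""
    X, y = [], []
    for w in words:
        for _ in range(samples_per_word):
            X.append(true_centroids[w] + rng.normal(0, noise, 7))
            y.append(w)
    return np.array(X), y

def predict(X_train, y_train, x):
    """Nearest-centroid classification: average the training vectors per
    word, then pick the word whose centroid is closest to the utterance."""
    centroids = {w: X_train[[i for i, lbl in enumerate(y_train) if lbl == w]].mean(axis=0)
                 for w in words}
    return min(centroids, key=lambda w: np.linalg.norm(centroids[w] - x))

X, y = make_training_set()
utterance = true_centroids["multiply"] + rng.normal(0, 0.05, 7)
```

Kapur's point about more training data maps directly onto this sketch: more labeled utterances per word means better centroid estimates, and therefore fewer misclassifications near the boundaries between words.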
Sci-fi movies have shaped the collective imagination of neural interfaces as some sort of hardware port or dongle sticking out of the neck, connecting the human brain to the Internet. But that approach, assuming it’s even possible, is still far off in the future.
This approach is much more feasible. Imagine if this object, AlterEgo, became the main computer peripheral, replacing the keyboard and mouse.
The question is not just about accuracy, but also about how its speed compares to existing input methods.
Watch the video.
Roam’s founder and CEO is Tim Swift, a longtime veteran of Ekso Bionics, one of the world’s leaders in exoskeletons. Swift loved what Ekso was building, but balked at the hefty price tag that came with systems designed to help the disabled walk. Building devices that aren’t accessible to the masses didn’t make sense to him anymore. So he struck out on his own, aiming to democratize exoskeletons.
Roam is using plastics and fabrics, and air for transmission. The company’s core insight, Swift says, is a unique fabric actuator that’s very lightweight, yet strong for its volume and weight. The system relies on valves and a backpack power pack to provide torque to the legs. It also has a machine learning element that’s meant to understand how you ski, and anticipate when you’re going to make a turn in order to deliver the extra torque just when you want it.
When ready for market, the skiing exoskeleton is expected to weigh under 10 pounds, including about four or five pounds of equipment that goes in the backpack.
The company claims the exoskeleton will make older skiers feel years younger and able to stay out on the slope for longer. And for athletes, the device will supposedly help them train for days in a row with less fatigue.
So far the company has only built prototypes, but it’s in the process of finalizing a commercial product, set for release in January 2019. Interested skiers can pay $99 to reserve a unit, although the final price is expected to be somewhere between $2,000 and $2,500.
Exoskeletons have a few clear use cases: people with disabilities, workers doing heavy lifting, and supersoldiers. Athletes, and healthy people who want to enjoy sports in their later years, are interesting new possibilities.
The Oton Glass are glasses with two tiny cameras and an earphone on the sides. Half of the lens is a mirror that reflects the user’s eye so that the inner-facing camera can track eye movements and blinks. Users look at some text and blink to capture a photo of what’s in front of them, which gets transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and then converted into a voice that plays through the earpiece. If the system is unable to read those words, a remote worker is available to troubleshoot.
The Oton was most recently a third-place runner-up for the James Dyson award in 2016:
Similar products exist, but they are not commercialized yet; they require technological breakthroughs and trial and error in how to deploy smart glasses. The originality of OTON GLASS consists of two aspects: technology and deployment. First, on the technology side, startups such as Orcam Inc. and Hours Technology Inc. are currently developing smart glasses for blind people. They mainly develop powerful OCR for English (alphabet) text using machine-learning techniques. OTON GLASS, by contrast, focuses on Japanese character recognition as its unique aspect, and aims to solve the user’s problems by becoming a hybrid (human-to-computer) recognizer rather than relying on OCR technology alone. Secondly, in terms of deployment, OTON GLASS is an all-in-one that combines camera and glasses, meaning they look like normal glasses. A capture trigger based on human behavior is a natural interaction for people.
Researchers have created an electronic skin that can be completely recycled. The e-skin can also heal itself if it’s torn apart. The device, described today in the journal Science Advances, is basically a thin film equipped with sensors that can measure pressure, temperature, humidity, and air flow. The film is made of three commercially available compounds mixed together in a matrix and laced with silver nanoparticles: when the e-skin is cut in two, adding the three compounds to the “wound” allows the e-skin to heal itself by recreating chemical bonds between the two sides. That way, the matrix is restored and the e-skin is as good as new. If the e-skin is broken beyond repair, it can just be soaked in a solution that “liquefies” it so that the materials can be reused to make new e-skin. One day, this electronic skin could be used in prosthetics, robots, or smart textiles.
Pixie Pads will help incontinent adults, including Alzheimer’s and other dementia sufferers, for whom behavioral symptoms of UTI are often confused with progression of dementia. Patients suffering the effects of stroke, spinal cord injury, or developmental disabilities, and men recovering from radical prostatectomy will also benefit from continuous monitoring enabled by Pixie Pads.
Disposable Pixie Pads contain an indicator panel that is scanned by a caregiver using the mobile Pixie App at changing time. The app stores urinalysis data in a secure online service for review and long-term monitoring. It issues an alert to a professional caregiver if there are signs of an infection that require further attention.
This was happening in mid-2016. One year later, Pixie Scientific got FDA approval to sell in the US as well and started shipping the pads.
Notice that the company initially targeted a completely different market, newborn children, but I guess it wasn’t received too well. While monitoring the body can help diagnose and cure illnesses early on, it’s a big cultural shift from the state of “blindness” we are used to. Too much monitoring can create a state of anxiety and hyper-reaction to any exception to the baseline, not just legit symptoms.
The vest that Paul Collins has been wearing at Ford is made by Ekso Bionics, a Richmond, California-based company. It’s an electronics-free contraption, and the soft part that hugs his chest looks like the front of a backpack. But the back of it has a metal rod for a spine, and a small, curved pillow rests behind his neck. Extending from the spine are spring-loaded arm mechanics, ones that help Collins lift his arms to install carbon cans on Ford C-Max cars, and rubber grommets on Ford Focuses — about 70 cars an hour.
Since 2011, Ford has been working, in some capacity, on wearable robotics solutions. But rather than trying to develop something that would give workers superhuman strength, the idea is to prevent injury. “In 2016, our injury statistics were the lowest we’ve seen on record. We’ve had an 83 percent decrease in some of these metrics over the past five years, which is all great,” Smets said. “But if you look at the body parts that are still getting injured, it’s predominantly the shoulder. That’s our number one joint for injury. It’s also the longest to return to full functionality, and the most costly.”
The Ekso vest I tried costs around $6,500 and weighs nine pounds. Smets handed me a power tool, flipped a physical switch on the arm of the vest, and told me to raise my arms over my head as though I was on an assembly line. At some point during my movement, the exosuit kicked into action, its spring mechanism lifting my arms the rest of the way. I could leave my arms in place above my head, too, fully supported. My fingers started to tingle after a while in that position.
Watch the video.
Bacteria are able to do everything from breaking down toxins to synthesizing vitamins. When they move, they create strands of a material called cellulose that is useful for wound patches and other medical applications. Until now, bacterial cellulose could only be grown on a flat surface — and few parts of our body are perfectly flat. In a paper published today in Science Advances, researchers created a special ink that contains these living bacteria. Because it is an ink, it can be used to 3D print in shapes — including a T-shirt, a face, and circles — and not just flat sheets.
Bacterial cellulose is free of debris, holds a lot of water, and has a soothing effect once it’s applied on wounds. Because it’s a natural material, our body is unlikely to reject it, so it has many potential applications for creating skin transplants, biosensors, or tissue envelopes to carry and protect organs before transplanting them.
The amount of research on skin synthesis and augmentation is surprising. H+ is capturing a lot of articles about it.
The current speed record for typing via brain-computer interface is eight words per minute, but that uses an invasive implant to read signals from a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” explains Alcaide. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”
He says that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type.
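A toy illustration of how prediction cuts selection time: at roughly one letter per second, completing words from a frequency list saves selections. The vocabulary and counts below are made up:

```python
# Toy word-frequency vocabulary; a real system would use a language model.
vocab = {"hello": 120, "help": 80, "helmet": 15, "robot": 60}

def complete(prefix):
    """Suggest the most frequent vocabulary word extending the prefix."""
    candidates = [w for w in vocab if w.startswith(prefix)]
    return max(candidates, key=vocab.get, default=None)

# Selecting "hel" letter by letter (3 one-second selections) and accepting
# the suggested completion replaces 5 selections for "hello".
```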
This would have applications well beyond VR.
Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.
The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.
In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person’s head.
The above quote is interesting, especially because the research is ready to be tested but there’s no funding. However, the real value is in the video embedded in the page, where Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh, explains what the research frontier for neural interfaces is.
At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.
The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.
Olympus just presented a competing product for $1,500.
The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.
It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to adjust to multiple frames. It weighs 66 g.
The manual mentions multiple built-in apps: image and video players, a camera (1280x720px), a video recorder (20fps, up to 30min recording), and the QR scanner. It connects to other things via Bluetooth or wireless network.
You can download the Software Development Kit here.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.
The Guardian GT looks immense, but its real selling point is its dexterity. Two sensitive controllers are used to guide the huge robot arms, which follow the operator’s motions precisely. To get a closer look at the action, the video feed from a camera mounted on top of the Guardian GT is sent to a headset worn by the operator. The controllers also include force feedback, so the operator gets an idea of how much weight the robot is moving. Each arm can pick up 500 lbs independently.
The Guardian GT’s control system allows it to take on delicate tasks, like pushing buttons and flipping switches. The video feed also means it can be used remotely. Combined, these attributes make the robot perfectly suited for dangerous jobs like cleaning out nuclear power plants. An onboard power source also means it can be operated without a tether, roaming independently for hours at a time.
Sarcos is building a truly impressive series of robotic exoskeleton suits, not just the GT. You should also look at the Guardian XO on their website where there are better videos of all products than the one embedded in the above article.
Sarcos says that their technology is the future of heavy industry in a wide range of scenarios:
- nuclear reactor inspection and maintenance
- heavy equipment manufacturing
- palletizing and de-palletizing
- loading and unloading supplies
- shipboard and in-field logistics
- erecting temporary shelters
- equipment repairs
- medical evacuation
- moving rocks and debris in humanitarian missions
but I think this is just the beginning. Thanks to technological progress, their exoskeletons could become thinner and lighter, and be used in other fields too (including combat).
They are even attempting to establish a robot-as-a-service model.
As I observe the emergence of smart clothing in multiple categories (from smart socks to smart jackets), I am trying to imagine the implications for the buyer as more and more pieces of his/her wardrobe blend with technology.
Today smart clothing is mainly perceived as a nice-to-have by tech enthusiasts (both men and women), and as a gimmick by the larger mainstream audience. In the future, as the technology matures and starts providing significant benefits, smart clothing might become preferred rather than optional. What happens at that point?
Will the buyer continue to mix and match smart clothing pieces from different fashion brands as he/she does today with traditional clothing? Will he/she accept dealing with a separate app for each garment? Socks, jackets, bras, gloves, pants, etc. Or will there be a company that centralizes the ecosystem around its technology hub, in the same way Apple is centralizing the smart home ecosystem around HomeKit? Just one app to monitor all garments and understand our health status, mood, and performance.
Apple’s Angela Ahrendts comes from Burberry. At the time, the consensus was that she was hired to drive sales of upscale products like the premium Apple Watch Edition. Maybe there’s a longer-term reason?
What if technology becomes a primary driver for fashion purchasing decisions and such a centralizing company doesn’t emerge to save customers?
What if the buyer really cares about the technology benefits of smart clothing but doesn’t like the style or the colour of the few brands that offer the specific garment he/she wants?
I think that eventually some fashion brands will have to embrace smart clothing end to end, offering entire collections of smart clothes. Not just to differentiate, but to retain customer loyalty, in the same way most collections today include all the trendiest pieces. And at that point, controlling a whole collection of smart clothes will be an opportunity to innovate, to make customers feel better about their inner selves, not just their external appearance.
In the IT industry, today we say that every company is becoming a tech company. Tomorrow it might well be that every fashion brand becomes a tech brand.
Founded in 2011 by Vigano and his former Microsoft colleagues, Sensoria has developed an array of “smart” garments that can track your movements and measure how well you’re walking or running. The company offers an artificial intelligence-powered real-time personal trainer; it partnered with Microsoft last year to develop “smart soccer boots”; and it also partnered with Renault last year to build a smart racing suit for professional racecar drivers.
At the Microsoft Ignite conference in Orlando, I recently met an old friend of mine who works at this company. He showed me the smart sock. Here’s how it works:
1. Each smart sock is infused with three proprietary textile sensors under the plantar area (the bottom of the foot) to detect foot pressure.
2. Conductive fibers relay the data collected by the sensors to the anklet. The sock has been designed to function as a textile circuit board.
3. Each sock features magnetic contact points below the cuff so you can easily connect your anklet to activate the textile sensors.
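As a sketch of what the anklet could compute from the three plantar pressure readings once they arrive over the textile circuit, here is a simple footfall classifier; the sensor names, thresholds, and labels are illustrative assumptions, not Sensoria's algorithm:

```python
def foot_strike_type(heel, metatarsal, toe):
    """Classify a footfall from three plantar pressure readings
    (arbitrary units). Thresholds and labels are illustrative only."""
    if heel > metatarsal and heel > toe:
        return "heel strike"
    if toe > heel:
        return "forefoot strike"
    return "midfoot strike"
```

Running this per footfall over a session would yield exactly the kind of "how well you're walking or running" feedback the article describes.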
When I saw the product in person, I selfishly suggested a smart elbow brace for tennis players as I play squash.
There are a lot of applications for smart textiles beyond socks for sport, and in fact the company is entering the healthcare market too. But ever since meeting my friend, I have been wondering about the future of sports.
Today athletes are forbidden from augmenting their bodies through chemicals. But what if tomorrow the appeal of sport becomes how much technology can push a human body?
Sensors that can measure strain, and thus bodily motions, in real time have become a hot commodity. But figuring out an optical sensor that can stand up to large strains, such as those across a bent elbow or a clenched fist, has proved a tough problem to crack.
A team of researchers at Tsinghua University, China, led by OSA member Changxi Yang, now believes it’s come up with an answer: a fiber optic sensor made of a silicone polymer that can stand up to, and detect, elongations as great as 100 percent, and effortlessly snap back to an unstrained state for repeated use.
The fibers are made of polydimethylsiloxane (PDMS), a soft, stretchable silicone elastomer that’s become a common substrate in stretchable electronics. The team made the fibers by curing a liquid silicone solution in tube-shaped molds at 80 °C, doping the mix with Rhodamine B dye molecules, whose light absorption is wavelength dependent. Because stretching the fiber shrinks its diameter while leaving the total volume invariant, extending the fiber increases the optical path length for light passing through the dye-doped core. That increase, in turn, can be read in the attenuation of the fiber’s transmission spectra and tied to the amount of strain in the fiber.
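The constant-volume argument can be put into numbers with the Beer-Lambert law: absorbance grows linearly with optical path length, so attenuation in dB grows linearly with strain. The absorption coefficient and fiber length below are made-up values for illustration:

```python
import math

def transmission(strain, alpha=2.0, length0=0.05):
    """Fraction of light transmitted through the dye-doped fiber.

    Beer-Lambert: T = exp(-alpha * L). Constant volume means the dye
    concentration is unchanged while the stretched length is
    L0 * (1 + strain), so absorbance scales with elongation.
    alpha (1/m) and length0 (m) are assumed illustrative values."""
    return math.exp(-alpha * length0 * (1 + strain))

def attenuation_db(strain):
    """Attenuation in dB, which grows linearly with strain."""
    return -10 * math.log10(transmission(strain))

for s in (0.0, 0.5, 1.0):                  # up to 100 percent elongation
    print(f"strain {s:.0%}: {attenuation_db(s):.3f} dB")
```

The linearity is the useful property: reading off the attenuation gives the strain directly, with no calibration curve beyond a slope and an offset.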
This could lead to a new generation of “smart clothing”, especially for sport and medical applications.
In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.
In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.
Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
- Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems to be capable of processing any input, extracting a meaningful pattern out of it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
- We are visual creatures so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if it’s a wrong assumption?
Chronic wounds do not heal in an orderly fashion in part due to the lack of timely release of biological factors essential for healing. Topical administration of various therapeutic factors at different stages is shown to enhance the healing rate of chronic wounds. Developing a wound dressing that can deliver biomolecules with a predetermined spatial and temporal pattern would be beneficial for effective treatment of chronic wounds. Here, an actively controlled wound dressing is fabricated using composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers. The fibers are loaded with different drugs and biological factors and are then assembled using textile processes to create a flexible and wearable wound dressing. These fibers can be individually addressed to enable on-demand release of different drugs with a controlled temporal profile. Here, the effectiveness of the engineered dressing for on-demand release of antibiotics and vascular endothelial growth factor (VEGF) is demonstrated for eliminating bacterial infection and inducing angiogenesis in vitro. The effectiveness of the VEGF release on improving healing rate is also demonstrated in a murine model of diabetic wounds.
Instead of plain sterile cotton or other fibers, this dressing is made of “composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers,” which really says it all.
It acts as a regular bandage, protecting the injury from exposure and so on, but attached to it is a stamp-sized microcontroller. When prompted by an app (or an onboard timer, or conceivably sensors woven into the bandage), the microcontroller sends a voltage through certain of the fibers, warming them and activating the medications lying dormant in the hydrogel.
Those medications could be anything from topical anesthetics to antibiotics to more sophisticated things like growth hormones that accelerate healing. More voltage, more medication — and each fiber can carry a different one.
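A minimal sketch of that addressing logic, with assumed trigger temperatures: each fiber is heated individually by its core heater, and its payload releases only once the thermoresponsive carrier's trigger temperature is crossed. The temperatures and the generic drug names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Fiber:
    drug: str
    trigger_c: float       # release temperature of the thermoresponsive carrier
    temp_c: float = 32.0   # assumed resting skin temperature

    def apply_voltage(self, delta_c):
        """The core heater warms the fiber; the hydrogel releases its
        payload once the carrier's trigger temperature is crossed."""
        self.temp_c += delta_c
        return self.temp_c >= self.trigger_c

dressing = {
    "antibiotic": Fiber(drug="antibiotic", trigger_c=40.0),
    "growth":     Fiber(drug="VEGF", trigger_c=42.0),
}

# Address only the antibiotic fiber: heating it past its trigger releases
# that drug while the VEGF fiber stays dormant.
released = {name: fiber.apply_voltage(9.0 if name == "antibiotic" else 0.0)
            for name, fiber in dressing.items()}
```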
In summer 2016, we met to build a low-cost brain-computer interface that you could plug into your phone. We want everyone interested in BCI technology to be able to try it out.
Two months later, we premiered the world’s first £20 BCI at EMF camp as ‘smartphone-BCI’.
As of summer 2017, we have:
- a simple, two electrode EEG kit that amplifies neural signals, and modulates them for input to an audio jack;
- a basic Android diagnostic app;
- an SSVEP Unity text entry app.
The v0.1 circuit reads a bipolar EEG signal and sends the signal out along an audio cable, for use in a smartphone, tablet, laptop, etc.
EEG signals are difficult to work with: they are very faint and easily interfered with by other signals, including muscle movements and mains electricity, both of which are much more powerful. Also, the interesting frequencies range from 4 Hz to 32 Hz (depending on the intended use), but a smartphone sound card will filter out all signals below 20 Hz.
Thus, the v0.1 circuit:
- amplifies the signals that come from the electrodes, boosting them from the microvolt to the millivolt range;
- uses amplitude modulation to add a 1 kHz carrier tone, allowing the signal to bypass the 20 Hz high-pass filter behind the phone’s audio jack.
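The two steps can be sketched numerically. The sample rate, gain, and modulation depth below are illustrative, and the demodulation is done in software here simply to show that the low-frequency signal survives the trip through the carrier:

```python
import numpy as np

fs = 8000                                   # audio-rate sampling, Hz
t = np.arange(0, 1, 1 / fs)

# A faint 10 Hz EEG-band signal, amplified from microvolts toward millivolts.
eeg = 50e-6 * np.sin(2 * np.pi * 10 * t)
amplified = eeg * 1000                      # gain stage

# Amplitude-modulate onto a 1 kHz carrier: the content now sits far above
# the ~20 Hz high-pass filter behind the phone's audio jack.
carrier = np.sin(2 * np.pi * 1000 * t)
modulated = (1 + amplified / 0.1) * carrier

# Software demodulation on the phone: mix back down with the carrier,
# low-pass with a 10 ms moving average, strip the DC offset, and rescale.
mixed = modulated * carrier
kernel = np.ones(80) / 80                   # 10 ms boxcar low-pass
recovered = (np.convolve(mixed, kernel, mode="same") - 0.5) * 2 * 0.1
```

The 10 ms averaging window spans an integer number of carrier cycles, so the high-frequency products cancel cleanly while the 10 Hz envelope passes almost unattenuated.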
As humans, we can perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.” He wants to change that. His research into our brain processes has led him to create new interfaces — such as a sensory vest — to take in previously unseen information about the world around us.
A truly radical idea. Mindblowing.
Vizor is a sort of eyewear with clear lenses. But it can also project your patient’s spine in 3D so that you can locate your tools in real time even when they’re below the skin. It also has multiple sensors to detect your head movements.
Hospitals first have to segment the spine from the rest of the scan, such as soft tissue. They already have all the tools they need to do it themselves.
Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.
Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.
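Marker-based registration of this kind is, at heart, a rigid point-set alignment problem: given matched marker positions in the scan and as seen by the headset's sensors, find the rotation and translation mapping one onto the other. Here is a minimal sketch using the standard Kabsch/SVD method, with made-up marker coordinates (an illustration, not Augmedics' actual algorithm):

```python
import numpy as np

def rigid_register(scan_pts, sensed_pts):
    """Find rotation R and translation t such that sensed ≈ R @ scan + t,
    using the Kabsch (SVD) method on matched (N, 3) marker arrays."""
    scan_c = scan_pts - scan_pts.mean(axis=0)
    sensed_c = sensed_pts - sensed_pts.mean(axis=0)
    H = scan_c.T @ sensed_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sensed_pts.mean(axis=0) - R @ scan_pts.mean(axis=0)
    return R, t

# Hypothetical markers segmented from the CT scan (arbitrary units).
scan = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
# The same markers as the sensors see them after the patient shifts:
# rotated 10 degrees about z and translated.
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
t_true = np.array([5.0, -2.0, 1.0])
sensed = scan @ R_true.T + t_true
R, t = rigid_register(scan, sensed)
```

Re-running an alignment like this every time the markers move (with breathing, say) is what lets an overlay track the spine in real time.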
Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.
Remarkable, but hard to explain in words. Watch the video.
Amazon’s first wearable device will be a pair of smart glasses with the Alexa voice assistant built in, according to a report in the Financial Times. The device will reportedly look like a regular pair of glasses and use bone-conduction technology so that the user can hear Alexa without the need for earphones or conventional speakers. It likely won’t have a screen or camera, however, although Google Glass founder Babak Parviz has apparently been working on the project since his hiring by Amazon in 2014.
Google failed at this in the same way Microsoft failed at tablets before Apple introduced the iPad. Execution is everything, and maybe glasses that offer only a voice user interface are a more manageable first step than glasses that also feature augmented vision.
On the other hand, Amazon hasn’t shone as a hardware vendor so far. Its Android phone, a primary vector for Alexa, was a failure. The other devices it sells are OK but not memorable, and not aesthetically pleasing (which matters in fashion accessories like glasses).
One final thought: Amazon’s long-term goal is to have Alexa everywhere, so either these glasses will get increasingly cheaper (as Kindle devices do) or Amazon will find a way to apply the same technology to every pair of glasses on the market.
The Nuada is a smart glove. It restores hand strength and coordination by augmenting the motions of your palm and digits, acting as an electromechanical support system that lets you perform nearly superhuman feats or simply carry out day-to-day tasks. The glove contains electronic tendons that help the hand open, close and perform basic motions, while a sensor reports pull strength, dexterity and other metrics to doctors and users.
“We then use our own electromechanical system to support the user in the movement he wants to do,” said Quinaz. “This makes us able to support incredible weights with a small system, that needs much less energy to function. We can build the first mass adopted exoskeleton solutions with our technology.”
the team’s inventions include a biodegradable semi-conductive polymer, disintegrable and flexible electronic circuits, and a biodegradable substrate material for mounting these electrical components onto.
Totally flexible and biocompatible, the ultra-thin film substrate allows the components to be mounted onto both rough and smooth surfaces.
Together, the components can be used to create biocompatible, ultra-thin, lightweight and low-cost electronics for applications ranging from wearable electronics to large-scale environmental surveys.
Maybe this is one of the many approaches we’ll use for biohacking or as wearable technology in the future.
The human vagus nerve contains around 100,000 individual nerve fibres, which branch out to reach various organs. But the amount of electricity needed to trigger neural activity can vary from fibre to fibre by as much as 50-fold.
Yaakov Levine, a former graduate student of Tracey’s, has worked out that the nerve fibres involved in reducing inflammation have a low activation threshold. They can be turned on with as little as 250-millionths of an amp — one-eighth the amount often used to suppress seizures. And although people treated for seizures require up to several hours of stimulation per day, animal experiments have suggested that a single, brief shock could control inflammation for a long time. Macrophages hit by acetylcholine are unable to produce TNF-α for up to 24 hours, says Levine, who now works in Manhasset at SetPoint Medical, a company established to commercialize vagus-nerve stimulation as a medical treatment.
By 2011, SetPoint was ready to try the technique in humans, thanks to animal studies and Levine’s optimization efforts. That first trial was overseen by Paul-Peter Tak, a rheumatologist at the University of Amsterdam and at the UK-based pharmaceutical company GlaxoSmithKline. Over the course of several years, 18 people with rheumatoid arthritis have been implanted with stimulators, including Katrin.
For the images of the actual device, check Core77. They also have implantable bioelectronic devices.
Ayoub, who is currently a Ph.D. researcher at Goldsmiths College in London, designed the glove for anyone who relies on sign language to communicate, from deaf people to children who have non-verbal autism and communicate through gestures. To use it, you simply put the glove on and start signing. The glove translates the signs in real time into sentences that appear on a small screen on the wrist, which can then be read out loud using a small speaker.
Watch the video.
The Chairless Chair® is a flexible wearable ergonomic sitting support designed by Sapetti and produced by the Swiss-based company noonee.
The main application of the Chairless Chair® is in manufacturing, where workers are required to stand for long periods of time and traditional seating is not suitable because it would clutter the work area. While wearing the Chairless Chair, users can walk normally and still have sitting support whenever they need it, without obstructing the work space. The supported position also avoids strenuous postures such as bending, squatting or crouching.
I wonder if the device impedes the act of running, in case of emergency.
Bionik Laboratories says it’s the first to add the digital assistant to a powered exoskeleton. The company has integrated Alexa with its lower-body Arke exoskeleton, allowing users to give voice commands like “Alexa, I’m ready to stand” or “Alexa, take a step.”
Movement of the Arke, which is currently in clinical development, is usually controlled by an app on a tablet or by reacting automatically to users’ movements. Sensors in the exoskeleton detect when the wearer shifts their weight, activating the motors in the backpack that help the individual move. For Bionik, adding Alexa can help individuals going through rehabilitation get familiar with these actions.
A voice-controlled exoskeleton is an interesting way to sidestep the complexity of building sophisticated brain-machine interfaces, but the current technology has serious limitations. For example, Alexa doesn’t yet have voice fingerprinting, so anybody in the room could, maliciously or not, utter a command on behalf of the user and harm that person with an undesired exoskeleton movement at the wrong time.
Nonetheless, these are valuable baby steps. If you are interested in Bionik Laboratories, you can see a lot more in their on-stage presentation at the IBM Insight conference in 2015.
Did you know that the wheelchair was invented 1500 years ago?
When you speak in English, there’s a short delay and then your interlocutor hears it in Mandarin Chinese (or whatever other languages are added later). They respond in Chinese, and you hear it in English — it’s really that simple.
The main issue I had was with the latency, which left Wells and me staring at each other silently for a three count while the app did its work. But the version I used wasn’t optimized for latency, and the team is hard at work reducing it.
“We’re trying to shorten the latency to 1-3 seconds, which needs lots of work in optimization of the whole process of data transmission between the earphones, app and server.”
researchers in China have developed a new type of user-interactive electronic skin, with a colour change perceptible to the human eye, and achieved with a much-reduced level of strain. Their results could have applications in robotics, prosthetics and wearable technology.
…the study from Tsinghua University in Beijing, employed flexible electronics made from graphene, in the form of a highly-sensitive resistive strain sensor, combined with a stretchable organic electrochromic device.
you wear the ring on your index finger, and when it vibrates with an incoming call, simply lift your hand up, touch your fingertip on a sweet spot just before your ear, then chat away. An earlier crowdfunding project, the Sgnl smart strap (formerly TipTalk) by Korea’s Innomdle Lab, had the same idea, but it has yet to ship to backers long after its February target date this year.
The Orii is essentially an aluminum ring melded to a small package containing all the electronics. The main body on the latest working prototype came in at roughly 30 mm long, 20 mm wide and 12 mm thick.
Thin-film electronic devices can be integrated with skin for health monitoring and/or for interfacing with machines. Minimal invasiveness is highly desirable when applying wearable electronics directly onto human skin. However, manufacturing such on-skin electronics on planar substrates results in limited gas permeability. Therefore, it is necessary to systematically investigate their long-term physiological and psychological effects.
As a demonstration of substrate-free electronics, here we show the successful fabrication of inflammation-free, highly gas-permeable, ultrathin, lightweight and stretchable sensors that can be directly laminated onto human skin for long periods of time, realized with a conductive nanomesh structure. A one-week skin patch test revealed that the risk of inflammation caused by on-skin sensors can be significantly suppressed by using the nanomesh sensors.
Furthermore, a wireless system that can detect touch, temperature and pressure is successfully demonstrated using a nanomesh with excellent mechanical durability. In addition, electromyogram recordings were successfully taken with minimal discomfort to the user.
The version on display is still a prototype, but all of the functionality is in place, using a motorized pulley system to bring mobility to legs impacted by stroke.
The device, now known as the Restore soft-suit, relies on a motor built into a waistband that controls a pair of cables that operate similarly to bicycle brakes, lifting a footplate in the shoe and moving the whole leg in the process. The unaffected leg, meanwhile, has sensors that measure the wearer’s gait while walking, syncing up the two legs’ movement.
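The leg-syncing logic can be caricatured in a few lines: when the sensors show the unaffected leg bearing weight (stance phase), the affected leg should be swinging, so the motor tensions the lift cable. The toy controller below, with hysteresis so the command doesn't chatter, is entirely my own illustration and not the Restore's actual firmware; the thresholds are made up:

```python
class GaitSync:
    """Tension the lift cable while the sound leg bears weight (stance phase).
    Hysteresis between the on/off thresholds avoids chattering when the
    normalized load reading hovers near a single cutoff."""
    def __init__(self, on=0.7, off=0.4):
        self.on, self.off, self.active = on, off, False

    def update(self, load):
        """load: normalized weight on the unaffected leg (0..1).
        Returns a cable tension command (0.0 = slack, 1.0 = full lift)."""
        if not self.active and load > self.on:
            self.active = True
        elif self.active and load < self.off:
            self.active = False
        return 1.0 if self.active else 0.0

ctrl = GaitSync()
samples = [0.1, 0.3, 0.75, 0.9, 0.6, 0.5, 0.3, 0.8]   # one simulated stride
commands = [ctrl.update(s) for s in samples]
```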
The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly to power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion, or kinetic energy: every time the eye blinks, we get some power. With the power problem solved, there are several applications we can think of, ordered from easiest to hardest:
- Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
- Level 2 – Gathering information from your body – like glucose monitoring for diabetics
- Level 3 – Augmenting your vision with digital overlay
- Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)
So when we ask the question “how far away are we from having smart contact lenses” the answer isn’t that simple. The first level we have already achieved.
scientists have created a super-thin wearable that can record data through the skin. That would make this wearable, which looks like a stylish gold tattoo, ideal for long-term medical monitoring — it’s already so comfortable that people forgot they were wearing it.
Most skin-based interfaces consist of electronics embedded in a substance, like plastic, that is then stuck onto the skin. The problem is that the plastic is often rigid, restricting movement and trapping sweat. In a paper published today in the journal Nature Nanotechnology, scientists used a material that dissolves under water, leaving the electronic part directly on the skin, comfortable to bend and wear.
Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.
For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)
Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.
If Glass EE gains traction, and I believe it will if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience that it can reuse for the AR contact lenses currently in the works.
The heart of the process is Waverly’s app, which both you and your friend need to download onto your phones (it’s free on both iOS and Android). Then, once you “sync” your conversation through a matching QR code on the app, you’re off and speaking. Press a button on the app and talk into the earpiece’s microphone to record what you want to say. Your voice is then piped through Waverly’s machine translation software, which converts it to text on your friend’s app. If he also has his own earpiece, your friend will hear a translated version of what you said, albeit via a computer voice.
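That relay (record, recognize, translate, then display or speak) can be sketched as a pipeline of stages. Every name below is illustrative, and both the speech recognizer and the word-for-word "translator" are stubs standing in for Waverly's actual server-side services:

```python
# Sketch of the Pilot-style relay; the stubs stand in for real ASR and MT.

def speech_to_text(audio):
    """Stand-in ASR: the real pipeline does server-side speech recognition."""
    return audio["utterance"]

def translate(text, src, dst, lexicon):
    """Stub word-for-word lookup; real machine translation happens server-side."""
    return " ".join(lexicon.get((word, src, dst), word) for word in text.split())

def relay(audio, src, dst, lexicon):
    text = speech_to_text(audio)
    caption = translate(text, src, dst, lexicon)
    # The caption appears in the friend's app; with an earpiece it is also
    # read aloud by a synthetic voice.
    return {"caption": caption}

lexicon = {("hello", "en", "es"): "hola", ("friend", "en", "es"): "amigo"}
out = relay({"utterance": "hello friend"}, "en", "es", lexicon)
```

Every stage adds delay, which is why the quoted latency work targets the whole chain from earphones to app to server and back.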
Language barrier issues won’t go away completely for years, but they will be significantly reduced.
Near the end of 2017 we’ll be consuming content synthesised to mimic real people, leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction. And as this technology is democratised it will become even more prevalent.
Preempting this, we prototyped Anti AI AI: a device worn on the ear and connected to a neural net trained on real and synthetic voices. The device notifies the wearer when a synthetic voice is detected, cooling the skin with a thermoelectric plate to signal that the voice they are hearing was synthesised: by a cold, lifeless machine.
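The alert loop is simple to sketch: a detector score per audio window drives the thermoelectric plate, with a short debounce so one noisy frame doesn't chill the wearer. The detector below is a stub (the prototype used the trained neural net), and the thresholds are my own assumptions:

```python
# Toy version of the Anti AI AI loop: a per-window detector score drives the
# cooling alert. The detector is a stub; the real one is a neural net trained
# on real and synthetic voices. Threshold and debounce values are made up.

def synthetic_voice_alerts(window_scores, threshold=0.8, consecutive=3):
    """Fire the cooling alert only after `consecutive` windows exceed the
    threshold, so a single noisy frame doesn't trigger the plate."""
    run, alerts = 0, []
    for score in window_scores:
        run = run + 1 if score > threshold else 0
        alerts.append(run >= consecutive)
    return alerts

scores = [0.2, 0.9, 0.95, 0.85, 0.99, 0.3, 0.9]   # detector output per window
alerts = synthetic_voice_alerts(scores)
```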