Wearable Technology

Neurable has been working on developing brain-control systems for VR for over a year

From Brain-Controlled Typing May Be the Killer Advance That AR Needs – MIT Technology Review

The current speed record for typing via brain-computer interface is eight words per minute, but that uses an invasive implant to read signals from a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” explains Alcaide. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”

He says that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type.

This would have applications well beyond VR.

We don’t yet understand how the brain encodes force

From For Brain-Computer Interfaces to Be Useful, They’ll Need to Be Wireless – MIT Technology Review

Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.

The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.

In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person’s head.

The above quote is interesting, especially because the research is ready to be tested but there’s no funding. The real value, however, is in the video embedded in the page, where Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh, explains what the research frontier for neural interfaces looks like.
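
The pipeline described above (electrodes, signal analysis, command, actuator) can be made concrete with a toy loop. Below is a minimal sketch in Python; the band-power feature, the 50.0 threshold, and the two commands are illustrative assumptions, not the decoder any of these labs actually uses.

    import numpy as np

    # Toy signal-to-command loop: electrodes -> feature -> command -> actuator.

    FS = 250  # sampling rate in Hz, typical for EEG/neural amplifiers

    def band_power(window, fs, lo, hi):
        """Power of the signal in the [lo, hi] Hz band, computed via FFT."""
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

    def decode(window):
        """Map a one-second window of brain signal to a discrete command."""
        alpha = band_power(window, FS, 8, 12)
        return "CLOSE_HAND" if alpha > 50.0 else "REST"

    def send_to_arm(command):
        print(f"robotic arm <- {command}")  # stand-in for the machine link

    # Fake one second of 'brain signal': a 10 Hz rhythm plus noise.
    t = np.arange(FS) / FS
    window = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(FS)
    send_to_arm(decode(window))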

AR glasses competition starts to get real

From Daqri ships augmented reality smart glasses for professionals | VentureBeat

At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.

and

The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.

Olympus just presented a competing product for $1,500.

Olympus EyeTrek is a $1,500 open-source, enterprise-focused smart glasses product

From Olympus made $1,500 open-source smart glasses – The Verge

The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it

It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to adjust to multiple frames. It weighs 66 g.

The manual mentions multiple built-in apps: image and video players, a camera (1280 x 720 px), a video recorder (20 fps, up to 30 min of recording), and a QR scanner. It connects to other devices via Bluetooth or Wi-Fi.

You can download the Software Development Kit here.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.

Robotic exoskeletons may revolutionize heavy industry

From Using this robot gives you monstrously powerful mech arms – The Verge

The Guardian GT looks immense, but its real selling point is its dexterity. Two sensitive controllers are used to guide the huge robot arms, which follow the operator’s motions precisely. To get a closer look at the action, video feed from a camera mounted on top of the Guardian GT is sent to a headset worn by the operator. And the controllers also include force feedback, so the operator gets an idea of how much weight the robot is moving. Each arm can pick up 500 lbs independently.

and

The Guardian GT’s control system allows it to take on delicate tasks, like pushing buttons and flipping switches. The video feed also means it can be used remotely. Combined, these attributes make the robot perfectly suited for dangerous jobs like cleaning out nuclear power plants. An onboard power source also means it can be operated without a tether, roaming independently for hours at a time.
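
Sarcos hasn’t published its control laws, but the scaled force feedback mentioned above is easy to sketch: the load measured at the robot’s wrist is scaled down into a range a human hand can comfortably feel. The 1% ratio and the 20 N cap below are invented numbers, purely for illustration.

    # Toy model of scaled force feedback for a teleoperated arm.

    FORCE_SCALE = 0.01   # operator feels 1% of the measured load (assumed)
    MAX_FEEDBACK_N = 20  # safety cap on the force pushed back at the hand

    def feedback_force(load_newtons):
        """Force the hand controller applies for a given load on the arm."""
        return min(load_newtons * FORCE_SCALE, MAX_FEEDBACK_N)

    # A 500 lb payload is about 2224 N; the operator would feel ~22 N,
    # clipped to the 20 N cap.
    print(feedback_force(2224.0))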

Sarcos is building a truly impressive series of robotic exoskeleton suits, not just the GT. You should also look at the Guardian XO on their website, which has better videos of all the products than the one embedded in the article above.

Sarcos says that their technology is the future of heavy industry in a wide range of scenarios:

  • nuclear reactor inspection and maintenance
  • petroleum
  • construction
  • heavy equipment manufacturing
  • palletizing and de-palletizing
  • loading and unloading supplies
  • shipboard and in-field logistics
  • erecting temporary shelters
  • equipment repairs
  • medical evacuation
  • moving rocks and debris in humanitarian missions

but I think this is just the beginning. As the technology progresses, their exoskeletons could become ever thinner and lighter, and find uses in other fields too (including combat).

They are even attempting to establish a robot-as-a-service model.

The fashion industry will have to embrace smart clothing end to end

As I observe the emergence of smart clothing in multiple categories (from smart socks to smart jackets), I am trying to imagine the implications for the buyer as more and more pieces of his/her wardrobe blend with technology.

Today smart clothing is mainly perceived as a nice-to-have by tech enthusiasts (both men and women), and as a gimmick by the larger mainstream audience. In the future, as the technology matures and starts providing significant benefits, smart clothing might become preferred rather than optional. What happens at that point?

Will the buyer continue to mix and match smart clothing pieces from different fashion brands as he/she does today with traditional clothing? Will he/she be willing to deal with a separate app for each garment (socks, jackets, bras, gloves, pants, and so on)? Or will there be a company that centralizes the ecosystem around its technology hub, in the same way Apple is centralizing the smart home ecosystem around its HomeKit? Just one app to monitor all garments and understand our health status, mood, and performance.

Apple’s Angela Ahrendts comes from Burberry. At the time of her hiring, the consensus was that she was brought in to drive the sales of upscale products like the premium Apple Watch Edition. Maybe there’s a longer-term reason?

What if technology becomes a primary driver for fashion purchasing decisions and such a centralizing company doesn’t emerge to save customers?
What if the buyer really cares about the technology benefits of smart clothing but doesn’t like the style or the colour of the few brands that offer the specific garment he/she wants?

I think that eventually some fashion brands will have to embrace smart clothing end to end, offering entire collections of smart clothes. Not just to differentiate, but to retain customer loyalty, in the same way most collections today include all the trendiest pieces. And at that point, controlling a whole collection of smart clothes will be an opportunity to innovate, to make customers feel better about their inner selves, not just their external appearance.

In the IT industry, today we say that every company is becoming a tech company. Tomorrow it might well be that every fashion brand becomes a tech brand.

Sensoria smart socks, augmented athletes, and the future of sport

From Smart sports apparel startup Sensoria gets into healthcare, forms new company with top provider – GeekWire

Founded in 2011 by Vigano and his former Microsoft colleagues, Sensoria has developed an array of “smart” garments that can track your movements and measure how well you’re walking or running. The company offers an artificial intelligence-powered real-time personal trainer; it partnered with Microsoft last year to develop “smart soccer boots”; and it also partnered with Renault last year to build a smart racing suit for professional racecar drivers.

At the Microsoft Ignite conference in Orlando, I recently met an old friend of mine who works at this company. He showed me the smart sock. Here’s how it works (a toy sketch in code follows the list):

      1. Each smart sock is infused with three proprietary textile sensors under the plantar area (bottom of the foot) to detect foot pressure.
      2. The conductive fibers relay data collected by the sensors to the anklet. The sock has been designed to function as a textile circuit board.
      3. Each sock features magnetic contact points below the cuff so you can easily connect your anklet to activate the textile sensors.
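
Sensoria’s firmware isn’t public, but the data flow above reduces to three pressure channels per foot that the anklet samples and turns into gait events. Here is a toy sketch; the channel layout and the 0.3 threshold are assumptions.

    # Toy gait-event detector for a three-sensor smart sock.
    # Assumed channels: heel, first metatarsal, fifth metatarsal.

    CONTACT_THRESHOLD = 0.3  # normalized pressure above which the foot is loaded

    def foot_strike(sample):
        """Classify one (heel, ball_inner, ball_outer) pressure reading."""
        heel, ball_inner, ball_outer = sample
        if max(sample) < CONTACT_THRESHOLD:
            return "swing"  # foot in the air
        if heel >= max(ball_inner, ball_outer):
            return "heel strike"
        return "forefoot strike"

    stream = [(0.0, 0.1, 0.0), (0.8, 0.2, 0.1), (0.2, 0.7, 0.5), (0.1, 0.0, 0.0)]
    for sample in stream:
        print(foot_strike(sample))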

When I saw the product in person, I selfishly suggested a smart elbow brace for tennis players, as I play squash.

There are a lot of applications for smart textiles beyond socks for sport, and in fact the company is entering the healthcare market too, but ever since meeting my friend I have been wondering about the future of sports.

Today athletes are forbidden from augmenting their bodies through chemicals. But what if tomorrow the appeal of sport becomes how much technology can push a human body?

Researchers develop Optical Motion and Strain Sensor for wearable applications

From Optical Strain Sensor for Wearable Tech | Optics & Photonics News

sensors that can measure strain, and thus bodily motions, in real time have become a hot commodity. But figuring out an optical sensor that can stand up to large strains, such as those across a bent elbow or a clenched fist, has proved a tough problem to crack.

A team of researchers at Tsinghua University, China, led by OSA member Changxi Yang, now believes it’s come up with an answer: a fiber optic sensor made of a silicone polymer that can stand up to, and detect, elongations as great as 100 percent—and effortlessly snap back to an unstrained state for repeated use

and

fibers made of polydimethylsiloxane (PDMS), a soft, stretchable silicone elastomer that’s become a common substrate in stretchable electronics. The team developed the fiber by cooking up a liquid silicone solution in tube-shaped molds at 80 °C, and doping the fiber mix with Rhodamine B dye molecules, whose light absorption is wavelength dependent. Because stretching of the fiber will shrink its diameter, leaving the total volume invariant, a fiber extension has the effect of increasing the optical length for light passing through the dye-doped fiber. That increase, in turn, can be read in the attenuation of the fiber’s transmission spectra, and tied to the amount of strain in the fiber.
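
The constant-volume argument maps directly onto the Beer-Lambert law: absorbance grows linearly with the optical path length, so relative attenuation tracks elongation. A minimal sketch of that readout model, with made-up calibration constants:

    import math

    # Beer-Lambert toy model of the dye-doped PDMS fiber: stretching a fiber
    # of rest length L0 to L0 * (1 + strain) keeps the volume (and the dye
    # concentration) constant, so absorbance grows linearly with length.

    ALPHA = 2.0  # absorbance per meter at the probe wavelength (assumed)
    L0 = 0.05    # rest length of the fiber in meters (assumed)

    def transmission(strain):
        """Fraction of light transmitted at a given strain (0.0 = unstretched)."""
        return 10 ** (-(ALPHA * L0 * (1.0 + strain)))

    def strain_from_transmission(T):
        """Invert the model: recover strain from a transmission reading."""
        return -math.log10(T) / (ALPHA * L0) - 1.0

    T = transmission(0.5)                         # stretch the fiber by 50%
    print(round(strain_from_transmission(T), 3))  # -> 0.5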

This could lead to a new generation of “smart clothing”, especially for sport and medical applications.

BAE Systems working on eye tracking for military digital helmets

From How wearable technology is transforming fighter pilots’ roles

In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.

In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.

Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
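
The “gaze plus a second control” pattern Page describes is easy to express in code: looking only highlights, a physical button commits. A minimal sketch of that selection logic, with hypothetical menu items and event handlers:

    # Gaze-plus-confirm selection, avoiding the 'Midas touch' of
    # dwell- or blink-only activation: gaze highlights, a button commits.

    class GazeMenu:
        def __init__(self, items):
            self.items = items
            self.highlighted = None

        def on_gaze(self, item):
            """Looking at an item only highlights it; nothing activates."""
            if item in self.items:
                self.highlighted = item

        def on_button_press(self):
            """The fingertip button commits whatever is highlighted."""
            if self.highlighted is not None:
                print(f"activated: {self.highlighted}")
            return self.highlighted

    menu = GazeMenu(["radar", "fuel", "comms"])
    menu.on_gaze("fuel")    # a glance: highlight only, no activation
    menu.on_button_press()  # a click: now 'fuel' activates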

Two thoughts:

  • Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems to be capable of processing any input, extracting a meaningful pattern out of it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
  • We are visual creatures, so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if that assumption is wrong?

A Textile Dressing for Temporal and Dosage Controlled Drug Delivery

From A Textile Dressing for Temporal and Dosage Controlled Drug Delivery – Mostafalu – 2017 – Advanced Functional Materials – Wiley Online Library

Chronic wounds do not heal in an orderly fashion in part due to the lack of timely release of biological factors essential for healing. Topical administration of various therapeutic factors at different stages is shown to enhance the healing rate of chronic wounds. Developing a wound dressing that can deliver biomolecules with a predetermined spatial and temporal pattern would be beneficial for effective treatment of chronic wounds. Here, an actively controlled wound dressing is fabricated using composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers. The fibers are loaded with different drugs and biological factors and are then assembled using textile processes to create a flexible and wearable wound dressing. These fibers can be individually addressed to enable on-demand release of different drugs with a controlled temporal profile. Here, the effectiveness of the engineered dressing for on-demand release of antibiotics and vascular endothelial growth factor (VEGF) is demonstrated for eliminating bacterial infection and inducing angiogenesis in vitro. The effectiveness of the VEGF release on improving healing rate is also demonstrated in a murine model of diabetic wounds.

Universities testing a smart bandage that automatically dispenses medication

From This smart bandage releases meds on command for better healing | TechCrunch

Instead of plain sterile cotton or other fibers, this dressing is made of “composite fibers with a core electrical heater covered by a layer of hydrogel containing thermoresponsive drug carriers,” which really says it all.

It acts as a regular bandage, protecting the injury from exposure and so on, but attached to it is a stamp-sized microcontroller. When prompted by an app (or an onboard timer, or conceivably sensors woven into the bandage), the microcontroller sends a voltage through certain of the fibers, warming them and activating the medications lying dormant in the hydrogel.

Those medications could be anything from topical anesthetics to antibiotics to more sophisticated things like growth hormones that accelerate healing. More voltage, more medication — and each fiber can carry a different one.
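
Because the fibers are individually addressable, the control model is simple: each fiber channel carries one drug, and a dose becomes a heating duration on that channel. A sketch of such a controller; the channel mapping, drug names, and seconds-per-dose calibration are all invented for illustration.

    # Toy controller for an individually addressable drug-release dressing.

    FIBER_CHANNELS = {"antibiotic": 0, "anesthetic": 1, "growth_factor": 2}
    SECONDS_PER_DOSE = 30  # assumed heating time to release one dose unit

    def set_channel_voltage(channel, on):
        print(f"channel {channel}: {'ON' if on else 'OFF'}")  # stand-in for GPIO

    def release(drug, doses=1):
        """Warm one fiber's heater core long enough to release the dose."""
        channel = FIBER_CHANNELS[drug]
        set_channel_voltage(channel, True)
        print(f"holding {doses * SECONDS_PER_DOSE}s to release {doses} dose(s) of {drug}")
        set_channel_voltage(channel, False)

    release("antibiotic")              # e.g. triggered on demand from an app
    release("growth_factor", doses=2)  # later stage of healing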

A brain-computer interface for under £20

From ici·bici: here’s a low cost, open source brain-computer interface

In summer 2016, we met to build a low-cost brain-computer interface that you could plug into your phone. We want everyone interested in BCI technology to be able to try it out.

Two months later, we premiered the world’s first £20 BCI at EMF camp as ‘smartphone-BCI’.

As of summer 2017, we have:

  • a simple, two electrode EEG kit that amplifies neural signals, and modulates them for input to an audio jack;
  • a basic Android diagnostic app;
  • an SSVEP Unity text entry app.

and

The v0.1 circuit reads a bipolar EEG signal and sends the signal out along an audio cable, for use in a smartphone, tablet, laptop, etc.

EEG signals are difficult to work with as they are very faint, and easily interfered with by other signals, including muscle movements and mains electricity – both of which are much more powerful. Also, the interesting frequencies range from 4 Hz to 32 Hz (depending on the intended use), but a smartphone sound card will filter out all signals below 20 Hz.

Thus, the v0.1 circuit:

  • amplifies the signals that come from the electrodes, boosting them from the microvolt to the millivolt range;
  • uses amplitude modulation to add a 1kHz carrier tone, allowing the signal to bypass the 20Hz high-pass filter behind the phone’s audio jack.
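
The amplitude-modulation trick is standard signal processing and can be checked numerically: riding on a 1 kHz carrier, the EEG band clears the audio input’s ~20 Hz high-pass filter, and the receiving side recovers it by rectifying and low-pass filtering. A sketch with NumPy; the constants are mine, not ici·bici’s.

    import numpy as np

    # A 10 Hz 'EEG' tone would be blocked by the ~20 Hz high-pass filter on a
    # phone's audio input, but riding on a 1 kHz carrier it passes through.
    # Demodulation = rectify, then low-pass (here, a simple moving average).

    FS = 44100              # phone sound card sampling rate
    t = np.arange(FS) / FS  # one second of samples

    eeg = np.sin(2 * np.pi * 10 * t)           # toy 10 Hz brain rhythm
    carrier = np.sin(2 * np.pi * 1000 * t)     # 1 kHz carrier tone
    transmitted = (1.0 + 0.5 * eeg) * carrier  # AM: energy now sits near 1 kHz

    # Receiver side: rectify and smooth to recover the 10 Hz envelope.
    width = FS // 100                          # ~10 ms moving-average window
    rectified = np.abs(transmitted)
    recovered = np.convolve(rectified, np.ones(width) / width, mode="same")
    recovered -= recovered.mean()              # drop the DC offset

    corr = np.corrcoef(recovered, eeg)[0, 1]
    print(f"correlation with the original 10 Hz signal: {corr:.3f}")  # ~0.99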

A vest to expand perception of the world through sensory substitution and sensory addition

From David Eagleman: Can we create new senses for humans? TED.com

As humans, we can perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.” He wants to change that. His research into our brain processes has led him to create new interfaces — such as a sensory vest — to take in previously unseen information about the world around us.

A truly radical idea. Mind-blowing.

AR glasses for surgical navigation reach 1.4mm accuracy

From Augmedics is building augmented reality glasses for spinal surgery | TechCrunch

Vizor is a sort of eyewear with clear glasses. But it can also project your patient’s spine in 3D so that you can locate your tools in real time even if it’s below the skin. It has multiple sensors to detect your head movements as well.

Hospitals first have to segment the spine from the rest of the scan, such as soft tissue. They already have all the tools they need to do it themselves.

Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.

Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.

Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.

Remarkable, but hard to explain in words. Watch the video.
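
Augmedics hasn’t published its registration math, but the textbook way to align markers seen by a tracking camera with the same markers located in a pre-operative scan is rigid point-set registration, for example the Kabsch (SVD) algorithm. A minimal sketch on synthetic data:

    import numpy as np

    # Rigid registration: find R, t such that tracked ~= R @ scan + t.

    def rigid_register(scan_pts, tracked_pts):
        scan_c, trk_c = scan_pts.mean(axis=0), tracked_pts.mean(axis=0)
        H = (scan_pts - scan_c).T @ (tracked_pts - trk_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, trk_c - R @ scan_c

    # Marker positions in the pre-op CT (mm)...
    scan = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [0, 0, 30]], float)
    # ...and the same markers as seen by the headset camera (rotated, shifted).
    a = np.deg2rad(10)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    tracked = scan @ R_true.T + np.array([5.0, -2.0, 12.0])

    R, t = rigid_register(scan, tracked)
    print(np.abs(scan @ R.T + t - tracked).max())  # residual ~0 on clean data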

Challenges and Opportunities for hypothetical Amazon Alexa-powered smart glasses

From Amazon working on Alexa-powered smart glasses, says report – The Verge

Amazon’s first wearable device will be a pair of smart glasses with the Alexa voice assistant built in, according to a report in the Financial Times. The device will reportedly look like a regular pair of glasses and use bone-conduction technology so that the user can hear Alexa without the need for earphones or conventional speakers. It won’t, however, likely have a screen or camera, although Google Glass founder Babak Parviz has apparently been working on the project following his hiring by Amazon in 2014

Google failed at this in the same way Microsoft failed at tablets before Apple introduced the iPad. Execution is everything, and maybe glasses that offer only a voice user interface are a more manageable first step than ones that attempt augmented vision too.

On the other hand, Amazon hasn’t shone as a hardware vendor so far. Its Android phone, a primary vector for Alexa, was a failure. The other devices it sells are OK but not memorable, and not aesthetically pleasing (which matters in fashion accessories like glasses).

One final thought: Amazon’s long-term goal is to have Alexa everywhere, so these glasses will either get increasingly cheap (as Kindle devices have) or Amazon will find a way to apply the same technology to every pair of glasses on the market.

Portuguese startup develops a smart glove to increase strength and coordination by augmenting the motion of palm and digits

From The Nuada smart glove gives your hand bionic powers | TechCrunch

The Nuada is a smart glove. It gives back hand strength and coordination by augmenting the motions of your palm and digits. It acts as an electromechanical support system that lets you perform nearly superhuman feats or simply perform day-to-day tasks. The glove contains electronic tendons that can help the hand open and close and even perform basic motions, and a sensor tells doctors and users about their pull strength, dexterity, and other metrics.

“We then use our own electromechanical system to support the user in the movement he wants to do,” said Quinaz. “This makes us able to support incredible weights with a small system, that needs much less energy to function. We can build the first mass adopted exoskeleton solutions with our technology.”

Stanford Researchers Create Organic Electronic Components That Dissolve Into The Body

From Scientists unveil ultra-thin electronics that can dissolve into the body

the team’s inventions include a biodegradable semi-conductive polymer, disintegrable and flexible electronic circuits, and a biodegradable substrate material for mounting these electrical components onto.

Totally flexible and biocompatible, the ultra-thin film substrate allows the components to be mounted onto both rough and smooth surfaces.

Altogether, the components can be used to create biocompatible, ultra-thin, lightweight and low-cost electronics for applications ranging from wearable electronics to large-scale environmental surveys.

Maybe this is one of the many approaches we’ll use for biohacking or as wearable technology in the future.

SetPoint Medical Working On a Device That Emits Electrical Pulses To Treat Arthritis

From The shock tactics set to shake up immunology : Nature

The human vagus nerve contains around 100,000 individual nerve fibres, which branch out to reach various organs. But the amount of electricity needed to trigger neural activity can vary from fibre to fibre by as much as 50-fold.

Yaakov Levine, a former graduate student of Tracey’s, has worked out that the nerve fibres involved in reducing inflammation have a low activation threshold. They can be turned on with as little as 250-millionths of an amp — one-eighth the amount often used to suppress seizures. And although people treated for seizures require up to several hours of stimulation per day, animal experiments have suggested that a single, brief shock could control inflammation for a long time. Macrophages hit by acetylcholine are unable to produce TNF-α for up to 24 hours, says Levine, who now works in Manhasset at SetPoint Medical, a company established to commercialize vagus-nerve stimulation as a medical treatment.

By 2011, SetPoint was ready to try the technique in humans, thanks to animal studies and Levine’s optimization efforts. That first trial was overseen by Paul-Peter Tak, a rheumatologist at the University of Amsterdam and at the UK-based pharmaceutical company GlaxoSmithKline. Over the course of several years, 18 people with rheumatoid arthritis have been implanted with stimulators, including Katrin.

For images of the actual device, check Core77, which also shows their implantable bioelectronic devices.

ReVoice Develops a Glove to Translate Sign Language Into Speech

From This Glove Translates Sign Language Into Speech

Ayoub, who is currently a Ph.D. researcher at Goldsmiths College in London, designed the glove for anyone who relies on sign language to communicate, from deaf people to children who have non-verbal autism and communicate through gestures. To use it, you simply put the glove on and start signing. The glove translates the signs in real time into sentences that appear on a small screen on the wrist, which can then be read out loud using a small speaker.

Watch the video.

noonee Develops A Wearable Chair

From A Wearable Chair Designed to Improve Working Conditions that Involve Manual Labor – Core77

The Chairless Chair® is a flexible wearable ergonomic sitting support designed by Sapetti and produced by the Swiss-based company noonee.

The main application of the Chairless Chair® is in manufacturing companies, where workers are required to stand for long periods of time and traditional seating is not suitable because it creates obstacles in the work area. While wearing the Chairless Chair, users can walk around with the sitting support without obstructing the work space. The position also avoids strenuous postures such as bending, squatting or crouching.

I wonder if the device impedes the act of running, in case of emergency.

Bionik Laboratories Explores Voice-Controlled Exoskeleton

From This exoskeleton can be controlled using Amazon’s Alexa – The Verge

Bionik Laboratories says it’s the first to add the digital assistant to a powered exoskeleton. The company has integrated Alexa with its lower-body Arke exoskeleton, allowing users to give voice commands like “Alexa, I’m ready to stand” or “Alexa, take a step.”

Movement of the Arke, which is currently in clinical development, is usually controlled by an app on a tablet or by reacting automatically to users’ movements. Sensors in the exoskeleton detect when the wearer shifts their weight, activating the motors in the backpack that help the individual move. For Bionik, adding Alexa can help individuals going through rehabilitation get familiar with these actions.

A voice-controlled exoskeleton is an interesting way to sidestep the complexity of creating sophisticated brain-machine interfaces, but current technology has serious limitations. For example, Alexa doesn’t yet have voice fingerprinting, so anybody in the room could, maliciously or not, utter a command on behalf of the user and trigger an undesired exoskeleton movement at the wrong time.

Nonetheless, these are valuable baby steps. If you are interested in Bionik Laboratories, you can see a lot more in their on-stage presentation at the IBM Insight conference in 2015.

Did you know that the wheelchair was invented 1500 years ago?

Timekettle is working on real-time translation earpieces

From Timekettle’s WT2 real-time translation earpieces enable ordinary conversation across language barriers | TechCrunch

When you speak in English, there’s a short delay and then your interlocutor hears it in Mandarin Chinese (or whatever other languages are added later). They respond in Chinese, and you hear it in English — it’s really that simple.

and

The main issue I had was with the latency, which left Wells and I staring at each other silently for a three count while the app did its work. But the version I used wasn’t optimized for latency, and the team is hard at work reducing it.

“We’re trying to shorten the latency to 1-3 seconds, which needs lots of work in optimization of the whole process of data transmission between the earphones, app and server,”

Chinese researchers develop colour-shifting electronic skin

From Colour-shifting electronic skin could have wearable tech and prosthetic uses – IOP Publishing

researchers in China have developed a new type of user-interactive electronic skin, with a colour change perceptible to the human eye, and achieved with a much-reduced level of strain. Their results could have applications in robotics, prosthetics and wearable technology.

…the study from Tsinghua University in Beijing employed flexible electronics made from graphene, in the form of a highly-sensitive resistive strain sensor, combined with a stretchable organic electrochromic device.

Orii builds a smart ring that turns a fingertip into an earpiece

From Orii smart ring turns your fingertip into a Bluetooth earpiece

you wear the ring on your index finger, and when it vibrates with an incoming call, simply lift your hand up, touch your fingertip on a sweet spot just before your ear, then chat away. An earlier crowdfunding project, the Sgnl smart strap (formerly TipTalk) by Korea’s Innomdle Lab, had the same idea, but it has yet to ship to backers long after its February target date this year.

The Orii is essentially an aluminum ring melded to a small package containing all the electronics. The main body on the latest working prototype came in at roughly 30 mm long, 20 mm wide and 12 mm thick.

Inflammation-free, gas-permeable, lightweight, stretchable on-skin electronics with nanomeshes

From Inflammation-free, gas-permeable, lightweight, stretchable on-skin electronics with nanomeshes : Nature Nanotechnology : Nature Research

Thin-film electronic devices can be integrated with skin for health monitoring and/or for interfacing with machines. Minimal invasiveness is highly desirable when applying wearable electronics directly onto human skin. However, manufacturing such on-skin electronics on planar substrates results in limited gas permeability. Therefore, it is necessary to systematically investigate their long-term physiological and psychological effects.

As a demonstration of substrate-free electronics, here we show the successful fabrication of inflammation-free, highly gas-permeable, ultrathin, lightweight and stretchable sensors that can be directly laminated onto human skin for long periods of time, realized with a conductive nanomesh structure. A one-week skin patch test revealed that the risk of inflammation caused by on-skin sensors can be significantly suppressed by using the nanomesh sensors.

Furthermore, a wireless system that can detect touch, temperature and pressure is successfully demonstrated using a nanomesh with excellent mechanical durability. In addition, electromyogram recordings were successfully taken with minimal discomfort to the user.

ReWalk Robotics makes progress with its soft exoskeleton

From ReWalk Robotics shows off a soft exosuit designed to bring mobility to stroke patients | TechCrunch

The version on display is still a prototype, but all of the functionality is in place, using a motorized pulley system to bring mobility to legs impacted by stroke.

The device, now known as the Restore soft-suit, relies on a motor built into a waistband that controls a pair of cables that operate similarly to bicycle brakes, lifting a footplate in the shoe and moving the whole leg in the process. The unaffected leg, meanwhile, has sensors that measure the wearer’s gait while walking, syncing up the two legs’ movement.

Progress in Smart Contact Lenses

From Smart Contact Lenses – How Far Away Are They? – Nanalyze

The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly do we power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion or kinetic energy. Every time the eye blinks, we get some power. Now that we have the power problem solved, there are at least several applications we can think of in order of easiest first:

  • Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
  • Level 2 – Gathering information from your body – like glucose monitoring for diabetics
  • Level 3 – Augmenting your vision with digital overlay
  • Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)

So when we ask the question “how far away are we from having smart contact lenses” the answer isn’t that simple. The first level we have already achieved.

Scientists make a wearable substance that sticks to human skin without irritation and can record data

From This super-stretchy wearable feels like a second skin and can record data – The Verge

scientists have created a super-thin wearable that can record data through skin. That would make this wearable, which looks like a stylish gold tattoo, ideal for long-term medical monitoring — it’s already so comfortable that people forgot they were wearing it.

Most skin-based interfaces consist of electronics embedded in a substance, like plastic, that is then stuck onto the skin. Problem is, the plastic is often rigid or it doesn’t let you move and sweat. In a paper published today in the journal Nature Nanotechnology, scientists used a material that dissolves under water, leaving the electronic part directly on the skin and comfortable to bend and wear.

Google Glass Enterprise Edition gets adopted where it was always meant to be

From Google Glass 2.0 Is a Startling Second Act | WIRED

Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.

and

For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of the unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)

Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.

If Glass EE gains traction, and I believe it will if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience that it can reuse for the AR contact lenses currently in the works.

Earpiece translates in (almost) real-time a conversation in two different languages

From Waverly Labs Pilot Translation Kit Release Date, Price and Specs – CNET

The heart of the process is Waverly’s app, which both you and your friend need to download onto your phones (it’s free on both iOS and Android). Then, once you “sync” your conversation through a matching QR code on the app, you’re off and speaking. Press a button on the app and talk into the earpiece’s microphone to record what you want to say. Your voice is then piped through Waverly’s machine translation software which converts it to text on your friend’s app. If he also has his own earpiece, your friend will hear a translated version of what you said, albeit via a computer voice.

Language barriers won’t go away completely for years, but dealing with them will feel significantly different.
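
Functionally, the quoted description reduces to four stages: capture speech, transcribe it, machine-translate the text, then deliver it as on-screen text or a synthesized voice. Here is that pipeline with every stage mocked; a real implementation would call speech-to-text, translation, and text-to-speech services.

    # The translation-earpiece pipeline, with all stages mocked for illustration.

    def transcribe(audio):
        return audio["spoken_text"]  # mock STT: the audio arrives pre-labeled

    def translate(text, src, dst):
        phrasebook = {("en", "es"): {"where is the station?": "¿dónde está la estación?"}}
        return phrasebook.get((src, dst), {}).get(text.lower(), f"[{dst}] {text}")

    def deliver(text, peer):
        print(f"{peer}'s app shows: {text}")  # a TTS voice would also read it aloud

    audio = {"spoken_text": "Where is the station?"}  # captured by the earpiece mic
    deliver(translate(transcribe(audio), "en", "es"), peer="friend")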

AI and wearables powering the Post-Truth Era

From Anti AI AI — Wearable Artificial Intelligence – DT R&D

Near the end of 2017 we’ll be consuming content synthesised to mimic real people. Leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction. And as this technology is democratised it will be even more prevalent.

Preempting this we prototyped a device worn on the ear and connected to a neural net trained on real and synthetic voices called Anti AI AI. The device notifies the wearer when a synthetic voice is detected and cools the skin using a thermoelectric plate to alert the wearer the voice they are hearing was synthesised: by a cold, lifeless machine.
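
The detection loop itself is simple to sketch, even though the trained classifier is the hard part. A toy version with a stub in place of the neural net; the threshold and the chunking are assumptions.

    import random

    # The Anti AI AI loop, schematically: stream audio chunks, score each one
    # with a real-vs-synthetic voice classifier, and fire the thermoelectric
    # plate when the synthetic score crosses a threshold.

    THRESHOLD = 0.8

    def classify_chunk(chunk):
        """Stub for the trained model: returns P(voice is synthetic)."""
        return random.random()

    def cool_skin():
        print("thermoelectric plate: ON (synthetic voice detected)")

    for chunk in range(5):  # pretend stream of one-second audio chunks
        if classify_chunk(chunk) > THRESHOLD:
            cool_skin()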

Mind-blowing.