Augmented Reality

BAE Systems working on eye tracking for military digital helmets

From How wearable technology is transforming fighter pilots’ roles

In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.
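To make the "mouse for the eyes" idea concrete, here is a minimal sketch of a gaze-plus-confirm selection loop. Everything in it (the `read_gaze()` and `confirm_pressed()` callables, the dwell threshold) is hypothetical; it only illustrates how a second control avoids the Midas touch problem, not how BAE Systems' cockpit software works.

```python
import time

DWELL_SECONDS = 0.3   # assumed dwell time before a menu item is merely highlighted

def select_with_gaze(menu_items, read_gaze, confirm_pressed, hit_test):
    """Highlight whatever the pilot looks at, but only act on an explicit button press.

    read_gaze()        -> (x, y) gaze point (hypothetical eye-tracker API)
    confirm_pressed()  -> True when the stick/throttle button is clicked (hypothetical)
    hit_test(item, xy) -> True if the gaze point falls on the item's menu region
    """
    highlighted, since = None, None
    while True:
        gaze = read_gaze()
        target = next((item for item in menu_items if hit_test(item, gaze)), None)
        if target != highlighted:
            highlighted, since = target, time.monotonic()
        dwelled = highlighted is not None and (time.monotonic() - since) >= DWELL_SECONDS
        # Gaze alone never triggers anything: it only marks a candidate.
        if dwelled and confirm_pressed():
            return highlighted   # the "click" comes from the finger, not the eye
```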

In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.

Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
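As a toy illustration of the tailored-training idea, the sketch below adjusts a nominal 40-hour syllabus from two made-up eye metrics. The metric names and thresholds are my assumptions, not anything BAE Systems has published.

```python
def tailored_training_hours(fixation_on_correct_ratio, workload_index,
                            nominal_hours=40.0):
    """Scale training time from eye-tracking metrics (illustrative only).

    fixation_on_correct_ratio: 0..1, share of gaze time on the instruments the
                               task actually requires (assumed metric)
    workload_index:            0..1, e.g. derived from pupil diameter or blink
                               rate (assumed metric)
    """
    if fixation_on_correct_ratio >= 0.8 and workload_index <= 0.5:
        return nominal_hours * 0.75      # trainee is coping: shorten the syllabus
    if fixation_on_correct_ratio < 0.5 or workload_index > 0.8:
        return nominal_hours * 1.25      # struggling: allow longer
    return nominal_hours

print(tailored_training_hours(0.85, 0.4))  # -> 30.0
```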

Two thoughts:

  • Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems to be capable of processing almost any input, extracting a meaningful pattern out of it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
  • We are visual creatures, so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if that assumption is wrong?

AR glasses for surgical navigation reach 1.4mm accuracy

From Augmedics is building augmented reality glasses for spinal surgery | TechCrunch

Vizor is a sort of eyewear with clear glasses. But it can also project your patient’s spine in 3D so that you can locate your tools in real time even if it’s below the skin. It has multiple sensors to detect your head movements as well.

Hospitals first have to segment the spine from the rest of the scan, such as soft tissue. They already have all the tools they need to do it themselves.

Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.

Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.

Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.
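As a rough sketch of what marker-based registration involves: estimate a rigid transform from the tracked markers each frame, re-project the pre-segmented spine model into the headset's frame, and check the residual against the accuracy budget. The Kabsch/SVD fit below is a standard technique, but the function names, units, and overall shape are my guess, not Augmedics' actual pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst (Kabsch, via SVD).
    src, dst: (N, 3) arrays of corresponding marker positions."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def overlay_spine(spine_points_ct, markers_ct, markers_tracked, budget_mm=2.0):
    """Re-register the CT-derived spine model to the live marker positions."""
    R, t = rigid_transform(markers_ct, markers_tracked)
    residual = np.linalg.norm((markers_ct @ R.T + t) - markers_tracked, axis=1).mean()
    if residual * 1000 > budget_mm:   # assuming coordinates are in metres
        raise RuntimeError(f"registration error {residual*1000:.1f} mm exceeds budget")
    return spine_points_ct @ R.T + t  # points to render in the headset frame
```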

Remarkable, but hard to explain in words. Watch the video.

Real-time people and object recognition for check-out at a retail shop

From Autonomous Checkout, Real Time System v0.21 – YouTube

This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.
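The quoted pipeline is a chain of per-frame stages feeding a per-shopper inventory. The sketch below only shows how such stages might be wired together; every stage function is a stand-in I invented, not Standard Cognition's models.

```python
from collections import defaultdict

def update_inventories(frame, detect_people, track, detect_items,
                       classify_item, resolve_owner, inventories=None):
    """One step of a hypothetical autonomous-checkout loop.

    detect_people(frame) -> list of person bounding boxes
    track(people)        -> the same people with persistent shopper IDs
    detect_items(frame)  -> item bounding boxes
    classify_item(box)   -> SKU string
    resolve_owner(item_box, shoppers) -> shopper ID holding/taking the item, or None
    """
    inventories = inventories if inventories is not None else defaultdict(list)
    shoppers = track(detect_people(frame))
    for item_box in detect_items(frame):
        sku = classify_item(item_box)
        owner = resolve_owner(item_box, shoppers)
        if owner is not None and sku not in inventories[owner]:
            inventories[owner].append(sku)   # "which person has what item"
    return inventories
```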

A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition during in-store shopping.

Retail checkout is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in their first retail shop.

Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.

Once smart contact lenses become a reality, this will be inevitable.

Univet Develops Modular AR Glasses with Interchangeable Lenses

From Red Dot Award Winning Tools: Univet 5.0 Augmented Reality Safety Glasses – Core77

Italian safety equipment manufacturer Univet just received a 2017 Red Dot Award for its augmented reality safety glasses. The glasses integrate Sony’s “holographic waveguide technology” into eye protection that allows wearers to view real time data without looking up from what they are doing.

A monocular projection system displays data on a holographic screen behind the right protective lens. The screen is clear so the wearer can see through it.
The glasses can use WiFi or Bluetooth to access data on computers, tablets, and smartphones. Information is able to travel in both directions so data can be collected from internal sensors such as the GPS and microphone or optional sensors such as thermometers and cameras.

Take a look at the pictures and videos.

Omega Ophthalmics turning the human eye into a platform for AR

From Omega Ophthalmics is an eye implant platform with the power of continuous AR | TechCrunch

… lens implants aren’t a new thing. Implanted lenses are commonly used as a solve for cataracts and other degenerative diseases mostly affecting senior citizens; about 3.6 million patients in the U.S. get some sort of procedure for the disease every year.

Cataract surgery involves removal of the cloudy lens and replacing it with a thin artificial type of lens. Co-founder and board-certified ophthalmologist Gary Wortz saw an opportunity here to offer not just a lens but a platform to which other manufacturers could add different interactive sensors, drug delivery devices and the inclusion of AR/VR integration.

Maybe there’s a surprisingly large audience among the over-60s that is willing to try to get a second youth through biohacking. Maybe the over-60s will become the first true augmented humans.

Progress in Smart Contact Lenses

From Smart Contact Lenses – How Far Away Are They? – Nanalyze

The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly do we power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion or kinetic energy. Every time the eye blinks, we get some power. Now that we have the power problem solved, there are at least several applications we can think of in order of easiest first:

  • Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
  • Level 2 – Gathering information from your body – like glucose monitoring for diabetics
  • Level 3 – Augmenting your vision with digital overlay
  • Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)

So when we ask the question “how far away are we from having smart contact lenses” the answer isn’t that simple. The first level we have already achieved.
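For a sense of scale on the "power from blinking" claim, here is a back-of-envelope estimate. Every number in it (blink rate, harvestable energy per blink, conversion efficiency) is an assumption of mine, not a figure from the article.

```python
# Rough order-of-magnitude check on blink-powered electronics (all numbers assumed).
blinks_per_minute = 15          # typical spontaneous blink rate
energy_per_blink_uJ = 2.0       # assumed mechanical energy harvestable per blink (microjoules)
harvester_efficiency = 0.10     # assumed conversion efficiency

avg_power_uW = (blinks_per_minute / 60) * energy_per_blink_uJ * harvester_efficiency
print(f"average harvested power ~ {avg_power_uW:.3f} microwatts")
# ~0.05 microwatts: enough only for ultra-low-power sensing, far short of driving
# a display, which is why Levels 1-2 look much nearer than Levels 3-4.
```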

Google Glass Enterprise Edition gets adopted where it was always meant to be

From Google Glass 2.0 Is a Startling Second Act | WIRED

Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.

and

For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of the unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)

Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.

If Glass EE gains traction, and I believe it will if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience that it can reuse on the AR contact lenses currently in the works.

While everybody develops bulky AR units, WaveOptics builds micro displays for AR

From WaveOptics raises $15.5 million for augmented reality displays | VentureBeat

While a number of major manufacturers are building the full AR systems (including the optics, sensors, camera, and head-mounted unit), WaveOptics is focused on developing the underlying optics to deliver an enhanced AR experience.

The core of the WaveOptics technology is a waveguide that is able to channel light input from a micro display positioned at the periphery of a lens made of glass — or in the future, plastic. Unlike conventional technologies that rely on cumbersome prisms, mirrors, or scarce materials, WaveOptics’ optical design harnesses waveguide hologram physics and photonic crystals, which enable lightweight design with good optical performance, the company said.

Flight attendants with HoloLens – What could possibly go wrong? 

From Will HoloLens turn air travelers into mixed-reality characters? – GeekWire

Imagine a world where headset-wearing flight attendants can instantly know how you’re feeling based on a computer analysis of your facial expression.

Actually, you don’t need to imagine: That world is already in beta, thanks to Air New Zealand, Dimension Data and Microsoft HoloLens.

In May, the airline announced that it was testing HoloLens’ mixed-reality system as a tool for keeping track of passengers’ preferences in flight – for example, their favorite drink and preferred menu items. And if the imaging system picked up the telltale furrowed brow of an anxious flier, that could be noted in an annotation displayed to the flight attendant through the headset.
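A trivial sketch of the annotation step described above, with a made-up expression-score format and threshold; this is illustrative only and not how the Air New Zealand / Dimension Data pilot actually works.

```python
def cabin_annotation(passenger, expression_scores, anxiety_threshold=0.7):
    """Build the note a crew member would see in the headset (illustrative only).

    passenger:         dict of stored preferences, e.g. {"drink": "gin and tonic"}
    expression_scores: dict of classifier outputs, e.g. {"anxious": 0.82, "neutral": 0.1}
    """
    notes = [f"Preferred drink: {passenger.get('drink', 'unknown')}"]
    if expression_scores.get("anxious", 0.0) >= anxiety_threshold:
        notes.append("Passenger may be anxious - check in before service")
    return " | ".join(notes)

print(cabin_annotation({"drink": "gin and tonic"}, {"anxious": 0.82}))
```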

Google already failed at this. The only places where AR glasses would be socially accepted are those where personnel wearing equipment is the norm.

It would take years, if not decades, for people to accept the idea that flight attendants must wear special equipment to serve drinks.

Google couldn’t see what we see with glasses, so they are trying through our smartphones

From Google Lens offers a snapshot of the future for augmented reality and AI | AndroidAuthority

At the recent I/O 2017, Google stated that we were at an inflexion point with vision. In other words, it’s now more possible than ever before for a computer to look at a scene and dig out the details and understand what’s going on. Hence: Google Lens.

This improvement comes courtesy of machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that utilize that data in useful ways. This is the same technology underlying voice assistants and even your recommendations on Spotify to a lesser extent.
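Google Lens itself is proprietary, but the "computer looks at a scene and digs out the details" idea can be illustrated with any off-the-shelf image classifier. The sketch below uses a pretrained torchvision ResNet purely as a stand-in; it is nowhere near what Lens actually does (OCR, landmarks, products, and more).

```python
import torch
from torchvision import models
from PIL import Image

# Generic pretrained classifier as a stand-in for scene understanding.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def top_labels(path, k=3):
    """Return the k most likely ImageNet labels for an image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(k)
    categories = weights.meta["categories"]
    return [(categories[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# Example: top_labels("street_scene.jpg") -> [("taxicab", 0.41), ...]
```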