Augmented Reality

Sub-$1,000 smart glasses are a reality now

From North is trying to become the Warby Parker of augmented reality glasses – The Verge

The glasses show wearers a bunch of information from their phone, can call an Uber, and are extremely customizable to the point of requiring a 3D model of each wearer’s face to make them work.

Lake and his team took me through the purchasing process, which involves sitting in a dark room surrounded by 16 cameras and one attendant. I had to put my hair back in a cotton headband (that I got to keep!) and line my face up with a pair of software-created glasses on a screen. The cameras then took a bunch of photos simultaneously to create a 3D model of my ears, nose, eyes, and face.

and

Each Focals pair features a tiny, color laser in the right arm that displays information from your phone over Bluetooth. That laser bounces off a piece of photopolymer material built into the glasses’ right lens, then heads into your eye. It creates a 15-degree viewing area that’s about 300 x 300 pixels.

and

North built custom software for the glasses and designed the UI in-house. It’s colorful with slight animations that I think look nice. You can view your messages, send automated responses that North crafted through SMS, call an Uber, get turn-by-turn directions through Mapbox, view your calendar, and check the weather.

The image will automatically disappear after three seconds of non-use; I wish there were an option to extend that.

Each pair has enough battery to last 18 hours, North says, and can be recharged only through their companion case. This case also charges the essential Focals accessory: the Loop. The Loop is a plastic ring with a joystick-like button that looks like any plastic smart ring you’ve seen on the market. It’s bulky and doesn’t look so nice, but it allows wearers to swipe through their glasses’ interface without having to touch their glasses or do something with their head. A ring makes way more sense to me, although again, it’s ugly.

You can swipe through your notifications by pushing left or right on the Loop joystick and pressing down to make a selection. You can also use it to trigger Amazon’s Alexa assistant because yes, Alexa is built-in. The glasses have a microphone and speaker inside, so you can issue commands to Alexa and hear responses if necessary. (Amazon was a leading investor in North’s Series B funding.)

A pair costs $999, which includes lenses, the prescription, anti-glare coatings, and the fitting.

Still far away from a mainstream product, but getting there. Dangerously fast. In fact, Amazon is a leading investor.

Also, it will be interesting to see what the scientific and medical community finds out about a laser projected onto the human retina, in terms of attention reduction and/or sleep pattern disruption.

It is the end of the poker face

From Poppy Crum: Technology that knows what you’re feeling | TED Talk

Your pupil doesn’t lie. Your eye gives away your poker face. When your brain’s having to work harder, your autonomic nervous system drives your pupil to dilate. When it’s not, it contracts. When I take away one of the voices, the cognitive effort to understand the talkers gets a lot easier. I could have put the two voices in different spatial locations, I could have made one louder. You would have seen the same thing. We might think we have more agency over the reveal of our internal state than that spider, but maybe we don’t.

Must-watch.

The moment a company brings to market a mainstream AR wearable, like smart contact lenses, that can act as an application platform, like iOS, and supports the installation of third-party applications through a marketplace, like the App Store, there will be a rush to develop AI apps that can read people’s behaviour in real time, in a way that most human brains cannot.

It doesn’t matter if the intentions are good. Such applications would expose vulnerabilities we are not prepared to defend against.
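
To underline how low the bar is, here is a naive sketch of the kind of inference such an app could run on raw eye data: flag “elevated cognitive load” whenever pupil diameter rises well above a rolling baseline. This is purely my own illustration, not anything from Crum’s talk; the window size and threshold are arbitrary assumptions.

```python
# Naive sketch: infer "elevated cognitive load" from pupil diameter alone.
# My own illustration of the idea in the talk, not Crum's method; the
# baseline window and threshold below are arbitrary assumptions.

from collections import deque
from statistics import mean, stdev

class PupilLoadEstimator:
    def __init__(self, window: int = 120, threshold: float = 2.0):
        self.baseline = deque(maxlen=window)   # recent pupil diameters (mm)
        self.threshold = threshold             # standard deviations that count as "working harder"

    def update(self, diameter_mm: float) -> bool:
        """Return True when the current sample sits well above the rolling baseline."""
        elevated = False
        if len(self.baseline) >= 10:
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            elevated = sigma > 0 and (diameter_mm - mu) / sigma > self.threshold
        self.baseline.append(diameter_mm)
        return elevated

# A steady stream of samples, then a sudden dilation.
estimator = PupilLoadEstimator()
readings = [3.0, 3.1, 2.9, 3.0, 3.1, 3.0, 2.9, 3.0, 3.1, 3.0, 3.6]
print([estimator.update(r) for r in readings])   # only the last sample is flagged
```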

From Augmented Reality to Altered Reality

From Dehumanization of Warfare: Legal Implications of New Weapon Technologies:

However, where soldiers are equipped with cybernetic implants (brain-machine interfaces) which mediate between an information source and the brain, the right to “receive and impart information without interference from a public authority” gains a new dimension. There are many technologies which provide additional information to armed forces personnel, e.g., heads-up displays for fighter pilots and the Q-warrior augmented reality helmets from BAE Systems, which are unlikely to impact this right.

However, there are technologies in development which are intended to filter data in order to prevent information overload. This may be particularly relevant where the implant or prosthetic removes visual information from view, or is designed to provide targeting information to the soldier. According to reports, software has been devised in Germany which allows for the deletion of visual information by smart glass or contact lens.

As one futurist was quoted as saying: “So if you decide you don’t like homeless people in your city, and you use this software and implant it in your contact lenses, then you won’t see them at all.”

An entire section of this book is dedicated to the legal and ethical implications of using supersoldiers, augmented by bionic prosthetics, augmented reality devices, and neural interfaces, in modern warfare. Highly recommended.

The book is now featured in the “Key Books” section of H+.

Our results are nearly indistinguishable from the real video

From Forget DeepFakes, Deep Video Portraits are way better (and worse) | TechCrunch

Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another.

and

There’s no way to make a person do something or make an expression that’s too far from what they do on camera, though. For instance, the system can’t synthesize a big grin if the person is looking sour the whole time (though it might try and fail hilariously). And naturally there are all kinds of little bugs and artifacts. So for now the hijinks are limited.

Astounding results. You must watch the video.

Now, what happens if this video editing happens in real time to alter the reality perceived through AR glasses? For example, the ones a soldier might use.

AR glasses further augmented by human assistants 

From Aira’s new smart glasses give blind users a guide through the visual world | TechCrunch

Aira has built a service that basically puts a human assistant into a blind user’s ear by beaming live-streaming footage from the glasses camera to the company’s agents who can then give audio instructions to the end users. The guides can present them with directions or describe scenes for them. It’s really the combination of the high-tech hardware and highly attentive assistants.

The hardware the company has run this service on in the past has been a bit of a hodgepodge of third-party solutions. This month, the company began testing its own smart glasses solution called the Horizon Smart Glasses, which are designed from the ground-up to be the ideal solution for vision-impaired users.

The company charges based on usage; $89 per month will get users the device and up to 100 minutes of usage. There are various pricing tiers for power users who need a bit more time.

The glasses integrate a 120-degree wide-angle camera so guides can gain a fuller picture of a user’s surroundings and won’t have to instruct them to point their head in a different direction quite as much. It’s powered by what the startup calls the Aira Horizon Controller, which is actually just a repurposed Samsung smartphone that powers the device in terms of compute, battery and network connection. The controller is appropriately controlled entirely through the physical buttons and also can connect to a user’s smartphone if they want to route controls through the Aira mobile app.

Interesting hybrid implementation and business model, but I have serious doubts that a solution depending on human assistants can scale at a planetary level, or retain the necessary quality of service at that scale.

Eye Tracking For AR Devices?

From Eye Tracking Is Coming to Virtual Reality Sooner Than You Think. What Now? | WIRED

That button had activated the eye-tracking technology of Tobii, the Swedish company where Karlén is a director of product management for VR. Two cameras inside the headset had begun watching my eyes, illuminating them with near-IR light, and making sure that my avatar’s eyes did exactly what mine did.

Tobii isn’t the only eye-tracking company around, but with 900 employees, it may be the largest. And while the Swedish company has been around since 2006, Qualcomm’s prototype headset—and the latest version of its Snapdragon mobile-VR platform, which it unveiled at the Game Developers Conference in San Francisco this week—marks the first time that eye-tracking is being included in a mass-produced consumer VR device.

and

Eye-tracking unlocks “foveated rendering,” a technique in which graphical fidelity is only prioritized for the tiny portion of the display your pupils are focused on. For Tobii’s version, that’s anywhere from one-tenth to one-sixteenth of the display; everything outside that area can be dialed down as much as 40 or 50 percent without you noticing, which means less load on the graphics processor. VR creators can leverage that luxury in order to coax current-gen performance out of a last-gen GPU, or achieve a higher frame rate than they might otherwise be able to.

That’s just the ones and zeros stuff. There are compelling interface benefits as well. Generally, input in VR is a three-step process: look at something, point at it to select it, then click to input the selection. When your eyes become the selection tool, those first two steps become one. It’s almost like a smartphone, where pointing collapses the selection and click into a single step. And because you’re using your eyes and not your head, that means less head motion, less fatigue, less chance for discomfort.

and

There’s also that whole cameras-watching-your-eyes thing. Watching not just what your eyes are doing, but where they look and for how long—in other words, tracking your attention. That’s the kind of information advertisers and marketers would do just about anything to get their hands on. One study has even shown that gaze-tracking can be (mis)used to influence people’s biases and decision-making.

“We take a very hard, open stance,” he says. “Pictures of your eyes never go to developers—only gaze direction. We do not allow applications to store or transfer eye-tracking data or aggregate over multiple users. It’s not storable, and it doesn’t leave the device.”

Tobii does allow for analytic collection, Werner allows; the company has a business unit focused on working with research facilities and universities. He points to eye-tracking’s potential as a diagnostic tool for autism spectrum disorders, to its applications for phobia research. But anyone using that analytical license, he says, must inform users and make eye-tracking data collection an opt-in process.

There is no reason why eye tracking couldn’t do the same things (and pose the same risks) in AR devices.
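
To make the foveated-rendering idea quoted above a bit more concrete, here is a minimal sketch of the logic, assuming normalized screen coordinates; the foveal radius and the peripheral quality floor are illustrative values loosely based on the figures in the article, not Tobii’s actual parameters.

```python
# Minimal sketch of foveated-rendering logic (illustrative only, not a real engine API).
# Assumes gaze and pixel coordinates are normalized to [0, 1] on both axes.

from dataclasses import dataclass
import math

@dataclass
class FoveationConfig:
    foveal_radius: float = 0.15   # region kept at full quality (roughly a tenth of the screen)
    outer_scale: float = 0.5      # peripheral quality dialed down ~50%, per the article

def shading_scale(pixel_x: float, pixel_y: float,
                  gaze_x: float, gaze_y: float,
                  cfg: FoveationConfig = FoveationConfig()) -> float:
    """Return the fraction of full rendering quality to spend on this pixel or tile."""
    distance = math.hypot(pixel_x - gaze_x, pixel_y - gaze_y)
    if distance <= cfg.foveal_radius:
        return 1.0                # full fidelity where the pupils are focused
    # Fall off smoothly toward the peripheral quality floor.
    falloff = min(1.0, (distance - cfg.foveal_radius) / cfg.foveal_radius)
    return 1.0 - falloff * (1.0 - cfg.outer_scale)

# Example: a tile far from the gaze point is rendered at roughly half quality.
print(shading_scale(0.9, 0.9, gaze_x=0.2, gaze_y=0.3))  # 0.5
```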

Why Augmented-Reality Glasses Are Ugly

From Why Do Augmented-Reality Glasses Look So Bad? | WIRED

“The battle is between immersive functionality and non-dorky, even cool-looking design. The holy grail is something that not only resembles a normal pair of, say, Gucci glasses, but has functionality that augments your life in a meaningful way.”

Right now, that demands a trade-off. The best AR displays require bulky optical hardware to optimize resolution and provide a wide field of view. That makes it possible to do all kinds of cool things in augmented reality. But early versions, like the Meta 2 AR headset, look more like an Oculus Rift than a pair of Warby Parkers. Slimmer AR displays, like the one used in Google Glass, feel more natural to wear, but they sit above or next to the normal field of vision, so they’re less immersive and less functional. Adding other features to the glasses—a microphone, a decent camera, various sensors—also increases bulk and makes it harder to create something comfortable or stylish.

This tension has split the field of AR glasses into two extremes. On one end, you get hulking glasses packed with features to show off the unbridled potential of augmented reality. On the other end, you sacrifice features to make a wearable that looks and feels more like normal eyewear.

Police in China have begun using sunglasses equipped with facial recognition technology

From Chinese police spot suspects with surveillance sunglasses – BBC News

The glasses are connected to an internal database of suspects, meaning officers can quickly scan crowds while looking for fugitives.

The sunglasses have already helped police capture seven suspects, according to Chinese state media.

The seven people who were apprehended are accused of crimes ranging from hit-and-runs to human trafficking.

and

The technology allows police officers to take a photograph of a suspicious individual and then compare it to pictures stored in an internal database. If there is a match, information such as the person’s name and address will then be sent to the officer.

An estimated 170 million CCTV cameras are already in place and some 400 million new ones are expected to be installed in the next three years.

Many of the cameras use artificial intelligence, including facial recognition technology.

In December 2017, I published Our Machines Can Very Easily Recognise You Among At Least 2 Billion People in a Matter of Seconds. It didn’t take long to go from press claims to real-world implementation.

Human augmentation 2.0 is already here, just not evenly distributed.

Our machines can very easily recognise you among at least 2 billion people in a matter of seconds

From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

AR glasses competition starts to get real

From Daqri ships augmented reality smart glasses for professionals | VentureBeat

At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.

and

The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.

Olympus just presented a competing product for $1,500.

Olympus EyeTrek is a $1,500 open-source, enterprise-focused smart glasses product

From Olympus made $1,500 open-source smart glasses – The Verge

The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.

It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to adjust to multiple frames. It weighs 66g.

The manual mentions multiple built-in apps: image and video players, a camera (1280x720px), a video recorder (20fps, up to 30min recording), and the QR scanner. It connects to other things via Bluetooth or wireless network.

You can download the Software Development Kit here.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.

BAE Systems working on eye tracking for military digital helmets

From How wearable technology is transforming fighter pilots’ roles

In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.

In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.

Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
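
The “gaze plus a second control” pattern Ms Page describes boils down to a very small piece of interaction logic. Here is a minimal, purely illustrative sketch (not BAE Systems’ software): gaze only moves a highlight, and nothing fires until a physical button confirms it, which is exactly what sidesteps the Midas touch.

```python
# Minimal sketch of gaze-plus-confirm selection (illustrative; not BAE Systems' actual software).
# Gaze alone only highlights a control; a separate button press commits it,
# which avoids the "Midas touch" of dwell-only activation.

from typing import Optional

class GazeConfirmSelector:
    def __init__(self) -> None:
        self.highlighted: Optional[str] = None

    def on_gaze(self, control_id: Optional[str]) -> None:
        """Called every frame with the control currently under the pilot's gaze."""
        self.highlighted = control_id  # the highlight follows the eyes but triggers nothing

    def on_confirm_button(self) -> Optional[str]:
        """Called when the physical confirm button is pressed; returns the activated control."""
        return self.highlighted

# Looking at a menu item does nothing by itself ...
selector = GazeConfirmSelector()
selector.on_gaze("fuel_menu")
# ... the command only fires when the button at the pilot's fingers is clicked.
assert selector.on_confirm_button() == "fuel_menu"
```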

Two thoughts:

  • Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems capable of processing any input, extracting a meaningful pattern out of it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
  • We are visual creatures, so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if that assumption is wrong?

AR glasses for surgical navigation reach 1.4mm accuracy

From Augmedics is building augmented reality glasses for spinal surgery | TechCrunch

Vizor is a sort of eyewear with clear glasses. But it can also project your patient’s spine in 3D so that you can locate your tools in real time even if it’s below the skin. It has multiple sensors to detect your head movements as well.

Hospitals first have to segment the spine from the rest of the scan, such as soft tissue. They already have all the tools they need to do it themselves.

Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.

Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.

Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.

Remarkable, but hard to explain in words. Watch the video.
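
For the technically curious: aligning a pre-operative scan with markers placed on the patient and on the tools is, at its core, a rigid point-set registration problem. Below is a minimal sketch of the standard Kabsch/Procrustes method with made-up marker coordinates; it illustrates the general technique, not Augmedics’ implementation, and the residual it prints is only an analogue of the accuracy figure quoted above.

```python
# Minimal sketch of marker-based rigid registration, the general technique behind
# aligning a pre-operative spine scan with markers tracked on the patient.
# Standard Kabsch/Procrustes method; the marker coordinates below are made up.

import numpy as np

def rigid_registration(scan_points: np.ndarray, tracked_points: np.ndarray):
    """Find rotation R and translation t mapping scan-space markers onto tracked markers."""
    scan_centroid = scan_points.mean(axis=0)
    tracked_centroid = tracked_points.mean(axis=0)
    H = (scan_points - scan_centroid).T @ (tracked_points - tracked_centroid)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tracked_centroid - R @ scan_centroid
    return R, t

# Markers as placed in the CT scan (mm) and as reported by the headset's tracking cameras.
scan = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0], [0.0, 30.0, 0.0], [0.0, 0.0, 25.0]])
rotation_90z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tracked = scan @ rotation_90z.T + np.array([100.0, 50.0, 10.0])

R, t = rigid_registration(scan, tracked)
residual = np.linalg.norm((scan @ R.T + t) - tracked, axis=1).mean()
print(f"mean registration error: {residual:.3f} mm")  # ~0 for this noise-free synthetic example
```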

Real-time people and object recognition for check-out at a retail shop

From Autonomous Checkout, Real Time System v0.21 – YouTube

This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.

A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition during in-store shopping.

In-store checkout is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in their first retail shop.

Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.

Once smart contact lenses are a reality, this will be inevitable.
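
For orientation, here is a highly simplified sketch of how stages like the ones listed in the excerpt (person detection, tracking, item detection and classification, ownership resolution, inventory analysis) could be chained frame by frame. Every component here is a dummy placeholder, not Standard Cognition’s or Amazon’s actual system.

```python
# Highly simplified sketch of an autonomous-checkout pipeline. All components are
# dummy placeholders; the point is only how the stages listed above fit together.

from collections import defaultdict

class CheckoutPipeline:
    def __init__(self, detect_people, track_people, detect_items, classify_item, resolve_owner):
        self.detect_people = detect_people
        self.track_people = track_people
        self.detect_items = detect_items
        self.classify_item = classify_item
        self.resolve_owner = resolve_owner
        self.inventories = defaultdict(set)   # shopper ID -> items currently held

    def process_frame(self, frame):
        shoppers = self.track_people(self.detect_people(frame))   # stable shopper IDs across frames
        for item in self.detect_items(frame):
            sku = self.classify_item(item)                        # which product it is
            owner = self.resolve_owner(item, shoppers)            # who picked it up, if anyone
            if owner is not None:
                self.inventories[owner].add(sku)                  # who holds what, in real time
        return self.inventories

# Dummy components standing in for the real detectors and trackers.
pipeline = CheckoutPipeline(
    detect_people=lambda frame: frame["people"],
    track_people=lambda people: people,
    detect_items=lambda frame: frame["items"],
    classify_item=lambda item: item["sku"],
    resolve_owner=lambda item, shoppers: item.get("near"),
)
frame = {"people": ["shopper-1"], "items": [{"sku": "soda-can", "near": "shopper-1"}]}
print(dict(pipeline.process_frame(frame)))   # {'shopper-1': {'soda-can'}}
```

The hard part, of course, sits inside the real detectors and trackers; the plumbing above is trivial by comparison.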

Univet Develops Modular AR Glasses with Interchangeable Lenses

From Red Dot Award Winning Tools: Univet 5.0 Augmented Reality Safety Glasses – Core77

Italian safety equipment manufacturer Univet just received a 2017 Red Dot Award for its augmented reality safety glasses. The glasses integrate Sony’s “holographic waveguide technology” into eye protection that allows wearers to view real time data without looking up from what they are doing.

A monocular projection system displays data on a holographic screen behind the right protective lens. The screen is clear so the wearer can see through it.
The glasses can use WiFi or Bluetooth to access data on computers, tablets, and smartphones. Information is able to travel in both directions so data can be collected from internal sensors such as the GPS and microphone or optional sensors such as thermometers and cameras.

Take a look at the pictures and videos.

Omega Ophthalmics turning the human eye into a platform for AR

From Omega Ophthalmics is an eye implant platform with the power of continuous AR | TechCrunch

… lens implants aren’t a new thing. Implanted lenses are commonly used as a solve for cataracts and other degenerative diseases mostly affecting senior citizens; about 3.6 million patients in the U.S. get some sort of procedure for the disease every year.

Cataract surgery involves removal of the cloudy lens and replacing it with a thin artificial type of lens. Co-founder and board-certified ophthalmologist Gary Wortz saw an opportunity here to offer not just a lens but a platform to which other manufacturers could add different interactive sensors, drug delivery devices and the inclusion of AR/VR integration.

Maybe there’s a surprisingly large audience among the over-60s willing to try to get a second youth through biohacking. Maybe the over-60s will become the first true augmented humans.

Progress in Smart Contact Lenses

From Smart Contact Lenses – How Far Away Are They? – Nanalyze

The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly do we power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion or kinetic energy. Every time the eye blinks, we get some power. Now that we have the power problem solved, there are at least several applications we can think of in order of easiest first:

  • Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
  • Level 2 – Gathering information from your body – like glucose monitoring for diabetics
  • Level 3 – Augmenting your vision with digital overlay
  • Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)

So when we ask the question “how far away are we from having smart contact lenses” the answer isn’t that simple. The first level we have already achieved.

Google Glass Enterprise Edition gets adopted where it was always meant to be

From Google Glass 2.0 Is a Startling Second Act | WIRED

Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.

and

For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of the unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)

Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.

If Glass EE gains traction, and I believe it will if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience that it can reuse on the AR contact lenses currently in the works.

While everybody develops bulky AR units, WaveOptics builds micro displays for AR

From WaveOptics raises $15.5 million for augmented reality displays | VentureBeat

While a number of major manufacturers are building the full AR systems (including the optics, sensors, camera, and head-mounted unit), WaveOptics is focused on developing the underlying optics to deliver an enhanced AR experience.

The core of the WaveOptics technology is a waveguide that is able to channel light input from a micro display positioned at the periphery of a lens made of glass — or in the future, plastic. Unlike conventional technologies that rely on cumbersome prisms, mirrors, or scarce materials, WaveOptics’ optical design harnesses waveguide hologram physics and photonic crystals, which enable lightweight design with good optical performance, the company said.

Flight attendants with HoloLens – What could possibly go wrong? 

From Will HoloLens turn air travelers into mixed-reality characters? – GeekWire

Imagine a world where headset-wearing flight attendants can instantly know how you’re feeling based on a computer analysis of your facial expression.

Actually, you don’t need to imagine: That world is already in beta, thanks to Air New Zealand, Dimension Data and Microsoft HoloLens.

In May, the airline announced that it was testing HoloLens’ mixed-reality system as a tool for keeping track of passengers’ preferences in flight – for example, their favorite drink and preferred menu items. And if the imaging system picked up the telltale furrowed brow of an anxious flier, that could be noted in an annotation displayed to the flight attendant through the headset.

Google already failed at this. The only places where AR glasses would be socially accepted are those where personnel wearing special equipment are the norm.

It would take years, if not decades, for people to accept the idea that flight attendants must wear special equipment to serve drinks.

Google couldn’t see what we see with glasses, so they are trying through our smartphones

From Google Lens offers a snapshot of the future for augmented reality and AI | AndroidAuthority

At the recent I/O 2017, Google stated that we were at an inflexion point with vision. In other words, it’s now more possible than ever before for a computer to look at a scene and dig out the details and understand what’s going on. Hence: Google Lens.

This improvement comes courtesy of machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that utilize that data in useful ways. This is the same technology underlying voice assistants and even your recommendations on Spotify to a lesser extent.