From Doctor, border guard, policeman – artificial intelligence in China and its mind-boggling potential to do right, or wrong | South China Morning Post
Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and you, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.
According to Yitu, its platform is also in service with more than 20 provincial public security departments and is used in more than 150 municipal public security systems across the country. Dragonfly Eye has already proved its worth: on its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.
Imagine this performed by a human eye augmented by AR lenses or glasses.
If you think that humans will confine this sort of application to a computer on your desk or inside your pocket, you are delusional.
From Daqri ships augmented reality smart glasses for professionals | VentureBeat
At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.
The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.
Olympus has just presented a competing product for $1,500.
From Olympus made $1,500 open-source smart glasses – The Verge
The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.
It’s really cool that it can be mounted on any pair of glasses; Olympus provides clips of various sizes to fit multiple frames. It weighs 66g.
The manual mentions multiple built-in apps: image and video players, a camera (1280x720px), a video recorder (20fps, up to 30min per recording), and a QR scanner. It connects to other devices via Bluetooth or a wireless network.
Olympus also provides a downloadable Software Development Kit.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.
From How wearable technology is transforming fighter pilots’ roles
In the past, eye-tracking technology has had a bad press. “Using eye blink or dwell for cockpit control selection led to the so called ‘Midas touch’ phenomenon, where people could inadvertently switch things on or off just by looking,” says Ms Page. But combine a gaze with a second control and the possibilities are vast. “Consider the mouse, a genius piece of technology. Three buttons but great versatility.” Pilots, she says, could activate drop-down menus with their gaze, and confirm their command with the click of a button at their fingers.
In future, eye-tracking might be used to assess a pilot’s physiological state. “There’s evidence parameters about the eye can tell us about an individual’s cognitive workload,” says Ms Page.
Eye-tracking technology could also monitor how quickly a pilot is learning the ropes, allowing training to be better tailored. “Instead of delivering a blanket 40 hours to everyone, for instance, you could cut training for those whose eye data suggest they are monitoring the correct information and have an acceptable workload level, and allow longer for those who need it.”
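The gaze-plus-button interaction Ms Page describes can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not actual cockpit software: gaze alone only highlights, and a separate physical control confirms, which is exactly what avoids the “Midas touch” problem.

```python
# Hypothetical sketch of gaze-plus-confirm selection. Gaze dwell only
# highlights a menu item; a deliberate button press is required to
# activate it, so looking at something never triggers it by accident.

class GazeMenu:
    def __init__(self, items):
        self.items = items
        self.highlighted = None   # item currently under the pilot's gaze
        self.activated = []       # log of confirmed selections

    def on_gaze(self, item):
        """Gaze merely highlights; it never triggers an action."""
        self.highlighted = item if item in self.items else None

    def on_button_press(self):
        """A separate physical control confirms the gazed-at item."""
        if self.highlighted is not None:
            self.activated.append(self.highlighted)
            return self.highlighted
        return None

menu = GazeMenu(["radar", "fuel", "comms"])
menu.on_gaze("fuel")      # looking at an item: highlighted only
menu.on_button_press()    # explicit click: now it activates
```

The same two-step pattern generalizes to any gaze-driven interface, which is why Ms Page’s mouse analogy is apt: the gaze is the cursor, the button is the click.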
- Obviously, human augmentation is initially focusing on vision, but that’s just the beginning. Our brain seems to be capable of processing almost any input, extracting a meaningful pattern out of it, and using it to improve our understanding of the world. I expect the auditory system to be the next AR focus. I’d assume augmented hearing would be especially useful in ground combat.
- We are visual creatures, so we are naturally inclined to assume that the large portion of our neocortex dedicated to image processing will be able to deal with even more data coming in. What if that assumption is wrong?
From Augmedics is building augmented reality glasses for spinal surgery | TechCrunch
Vizor is a sort of eyewear with clear glasses. But it can also project your patient’s spine in 3D, so that you can locate your tools in real time even when they are below the skin. It also has multiple sensors to detect your head movements.
Hospitals first have to segment the spine from the rest of the scan, such as soft tissue. They already have all the tools they need to do it themselves.
Then, doctors have to place markers on the patient’s body to register the location of the spine. This way, even if the patient moves while breathing, Vizor can automatically adjust the position of the spine in real time.
Surgeons also need to put markers on standard surgical tools. After a calibration process, Vizor can precisely display the orientation of the tools during the operation. According to Augmedics, it takes 10-20 seconds to calibrate the tools. The device also lets you visualize the implants, such as screws.
Elimelech says that the overall system accuracy is about 1.4mm. The FDA requires a level of accuracy below 2mm.
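The accuracy figure Elimelech quotes is essentially a registration error: how far the overlaid graphics sit from the true positions of the tracked markers. A toy check of that error against the FDA threshold might look like this (my own illustration with made-up coordinates, not Augmedics’ method):

```python
import math

# Toy illustration (not Augmedics' actual method): compute the RMS
# registration error between marker positions predicted by the AR
# overlay and their true measured positions, then compare it to the
# FDA's 2 mm accuracy requirement.

FDA_LIMIT_MM = 2.0

def rms_error_mm(predicted, actual):
    """Root-mean-square 3D distance, in millimetres, over marker pairs."""
    sq = [
        sum((p - a) ** 2 for p, a in zip(pt_pred, pt_act))
        for pt_pred, pt_act in zip(predicted, actual)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical marker coordinates (mm) after calibration.
predicted = [(0.0, 0.0, 0.0), (10.0, 0.0, 1.0), (0.0, 10.0, -1.0)]
actual    = [(0.5, 0.2, 0.1), (10.3, -0.4, 1.2), (-0.2, 9.8, -0.6)]

error = rms_error_mm(predicted, actual)
print(f"RMS error: {error:.2f} mm, within FDA limit: {error < FDA_LIMIT_MM}")
```

A reported 1.4mm system accuracy means this kind of residual, accumulated across scan segmentation, body markers, and tool calibration, stays comfortably under the 2mm bar.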
Remarkable, but hard to explain in words. Watch the video.
From Autonomous Checkout, Real Time System v0.21 – YouTube
This is a real time demonstration of our autonomous checkout system, running at 30 FPS. This system includes our models for person detection, entity tracking, item detection, item classification, ownership resolution, action analysis, and shopper inventory analysis, all working together to visualize which person has what item in real time.
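The last stage of the pipeline described above, turning per-frame model outputs into a live per-shopper inventory, can be sketched very roughly. This is my own simplification, not Standard Cognition’s code; the event format and names are invented for illustration:

```python
from collections import defaultdict

# Rough sketch of shopper inventory analysis: each tracked person
# accumulates the items that the upstream models (person detection,
# entity tracking, item classification, action analysis, ownership
# resolution) attribute to them, frame by frame.

def update_inventories(inventories, frame_events):
    """Apply one frame's resolved events to the running inventories.

    frame_events: list of (person_id, action, item) tuples, where the
    action is 'take' or 'put_back' as decided by the upstream models.
    """
    for person_id, action, item in frame_events:
        if action == "take":
            inventories[person_id].append(item)
        elif action == "put_back" and item in inventories[person_id]:
            inventories[person_id].remove(item)
    return inventories

inventories = defaultdict(list)
update_inventories(inventories, [("shopper_1", "take", "cereal")])
update_inventories(inventories, [("shopper_1", "take", "milk"),
                                 ("shopper_1", "put_back", "cereal")])
print(dict(inventories))  # shopper_1 ends up holding only the milk
```

The hard part, of course, is everything upstream of this bookkeeping: deciding at 30 FPS which hand took which item and which tracked body that hand belongs to.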
A few days ago, I shared a TED Talk about real-time face recognition. It was impressive. What I am sharing right now is even more impressive: real-time people and object recognition inside a physical shop.
Automated checkout is just one (very lucrative) application. The technology shown in this video has been developed by a company called Standard Cognition, but it’s very likely similar to the one that Amazon is testing in its first retail shop.
Of course, there are many other applications, like surveillance for law enforcement, or information gathering for “smart communication”. Imagine this technology used in augmented reality.
Once smart contact lenses become a reality, this will be inevitable.
From Red Dot Award Winning Tools: Univet 5.0 Augmented Reality Safety Glasses – Core77
Italian safety equipment manufacturer Univet just received a 2017 Red Dot Award for its augmented reality safety glasses. The glasses integrate Sony’s “holographic waveguide technology” into eye protection that allows wearers to view real time data without looking up from what they are doing.
A monocular projection system displays data on a holographic screen behind the right protective lens. The screen is clear so the wearer can see through it.
The glasses can use WiFi or Bluetooth to access data on computers, tablets, and smartphones. Information travels in both directions, so data can be collected from internal sensors, such as the GPS and microphone, or optional sensors, such as thermometers and cameras.
Take a look at the pictures and videos.
From Omega Ophthalmics is an eye implant platform with the power of continuous AR | TechCrunch
… lens implants aren’t a new thing. Implanted lenses are commonly used as a solution for cataracts and other degenerative diseases mostly affecting senior citizens; about 3.6 million patients in the U.S. get some sort of procedure for the disease every year.
Cataract surgery involves removal of the cloudy lens and replacing it with a thin artificial type of lens. Co-founder and board-certified ophthalmologist Gary Wortz saw an opportunity here to offer not just a lens but a platform to which other manufacturers could add different interactive sensors, drug delivery devices and the inclusion of AR/VR integration.
Maybe there’s a surprisingly large audience among the over-60s that is willing to try to get a second youth through biohacking. Maybe the over-60s will become the first true augmented humans.
From Smart Contact Lenses – How Far Away Are They? – Nanalyze
The idea of smart contact lenses isn’t as far away as you might think. The first problem that crops up is how exactly to power the electronics in a set of “smart” contact lenses. As it turns out, we can use the energy of motion, or kinetic energy: every time the eye blinks, we get some power. With the power problem solved, there are several applications we can think of, listed easiest first:
- Level 1 – Multifocal contact lenses like these from Visioneering Technologies, Inc. (VTI) or curing color blindness like these smart contact lenses called Colormax
- Level 2 – Gathering information from your body – like glucose monitoring for diabetics
- Level 3 – Augmenting your vision with digital overlay
- Level 4 – Complete virtual reality (not sure if this is possible based on the eye symmetry but we can dream a dream)
So when we ask the question “how far away are we from having smart contact lenses” the answer isn’t that simple. The first level we have already achieved.
From Google Glass 2.0 Is a Startling Second Act | WIRED
Companies testing EE—including giants like GE, Boeing, DHL, and Volkswagen—have measured huge gains in productivity and noticeable improvements in quality. What started as pilot projects are now morphing into plans for widespread adoption in these corporations. Other businesses, like medical practices, are introducing Enterprise Edition in their workplaces to transform previously cumbersome tasks.
For starters, it makes the technology completely accessible for those who wear prescription lenses. The camera button, which sits at the hinge of the frame, does double duty as a release switch to remove the electronics part of unit (called the Glass Pod) from the frame. You can then connect it to safety glasses for the factory floor—EE now offers OSHA-certified safety shields—or frames that look like regular eyewear. (A former division of 3M has been manufacturing these specially for Enterprise Edition; if EE catches on, one might expect other frame vendors, from Warby Parker to Ray-Ban, to develop their own versions.)
Other improvements include beefed-up networking—not only faster and more reliable wifi, but also adherence to more rigorous security standards—and a faster processor as well. The battery life has been extended—essential for those who want to work through a complete eight-hour shift without recharging. (More intense usage, like constant streaming, still calls for an external battery.) The camera was upgraded from five megapixels to eight. And for the first time, a red light goes on when video is being recorded.
If Glass EE gains traction, which I believe will happen if it evolves into a platform for enterprise apps, Google will gain a huge amount of information and experience that it can reuse for the AR contact lenses currently in the works.
From WaveOptics raises $15.5 million for augmented reality displays | VentureBeat
While a number of major manufacturers are building the full AR systems (including the optics, sensors, camera, and head-mounted unit), WaveOptics is focused on developing the underlying optics to deliver an enhanced AR experience.
The core of the WaveOptics technology is a waveguide that is able to channel light input from a micro display positioned at the periphery of a lens made of glass — or in the future, plastic. Unlike conventional technologies that rely on cumbersome prisms, mirrors, or scarce materials, WaveOptics’ optical design harnesses waveguide hologram physics and photonic crystals, which enable lightweight design with good optical performance, the company said.
From Will HoloLens turn air travelers into mixed-reality characters? – GeekWire
Imagine a world where headset-wearing flight attendants can instantly know how you’re feeling based on a computer analysis of your facial expression.
Actually, you don’t need to imagine: That world is already in beta, thanks to Air New Zealand, Dimension Data and Microsoft HoloLens.
In May, the airline announced that it was testing HoloLens’ mixed-reality system as a tool for keeping track of passengers’ preferences in flight – for example, their favorite drink and preferred menu items. And if the imaging system picked up the telltale furrowed brow of an anxious flier, that could be noted in an annotation displayed to the flight attendant through the headset.
Google already failed at this. The only places where AR glasses would be socially accepted are those where personnel wearing equipment are the norm.
It would take years, if not decades, for people to accept the idea that flight attendants need special equipment to serve drinks.
From Google Lens offers a snapshot of the future for augmented reality and AI | AndroidAuthority
At the recent I/O 2017, Google stated that we are at an inflection point with vision. In other words, it’s now more possible than ever before for a computer to look at a scene, dig out the details, and understand what’s going on. Hence: Google Lens.
This improvement comes courtesy of machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that utilize that data in useful ways. This is the same technology underlying voice assistants and even, to a lesser extent, your recommendations on Spotify.