Digital voicing to predict words and generate synthetic speech

November 24, 2020
From "UC Berkeley researchers detect ‘silent speech’ with electrodes and AI" on VentureBeat:

UC Berkeley researchers say they are the first to train AI using silently mouthed words and sensors that collect muscle activity. Silent speech is detected using electromyography (EMG), with electrodes placed on the face and throat. The model focuses on what researchers call digital voicing to predict words and generate synthetic speech.
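The core idea, learning a mapping from per-frame EMG muscle signals to speech features, can be illustrated with a toy example. This is not the authors' model (their paper uses neural networks and real recordings); it is a minimal sketch with simulated data and a linear least-squares fit, where all dimensions and variable names are hypothetical.

```python
import numpy as np

# Toy illustration (not the authors' method): map per-frame EMG features
# to speech features (e.g., spectrogram frames) with a linear model.
rng = np.random.default_rng(0)

n_frames, n_emg, n_speech = 200, 8, 5        # hypothetical dimensions
W_true = rng.normal(size=(n_emg, n_speech))  # unknown EMG-to-speech mapping

# Simulated EMG feature frames and the noisy speech features they produce
emg = rng.normal(size=(n_frames, n_emg))
speech = emg @ W_true + 0.01 * rng.normal(size=(n_frames, n_speech))

# Fit: predict speech features from silent-speech EMG frames
W_hat, *_ = np.linalg.lstsq(emg, speech, rcond=None)
pred = emg @ W_hat

mse = float(np.mean((pred - speech) ** 2))
print(f"frame-wise MSE: {mse:.6f}")
```

In the real system, the predicted speech features would then be passed to a vocoder to generate audible synthetic speech; the sketch stops at the feature-prediction step.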

and

“Digitally voicing silent speech has a wide array of potential applications,” the team’s paper reads. “For example, it could be used to create a device analogous to a Bluetooth headset that allows people to carry on phone conversations without disrupting those around them. Such a device could also be useful in settings where the environment is too loud to capture audible speech or where maintaining silence is important.”