These sonar-enabled smart glasses can listen in on silent voice commands

Matthew Griffin | Keynote Speaker & Master Futurist


WHY THIS MATTERS IN BRIEF

Some people can’t speak and others don’t want to speak out loud in certain situations, but now your silent words can be heard.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Some people lack the power of speech, others find themselves in noisy settings where speaking voice commands out loud just doesn’t work, and still others simply don’t want the people around them to hear what they’re saying. All of these scenarios have been problematic for privacy-conscious people in the past, but as we see more gadgets that let people talk silently to whoever is on the other end of the phone, another solution has emerged in the form of EchoSpeech glasses, which read their wearer’s silently spoken words.

 

RELATED
DARPA unveils its revolutionary CRANE X-Plane with no moving parts

 

The experimental eyewear is being developed by a team at Cornell University’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.

Two downwards-facing miniature speakers are mounted on the underside of the frame beneath one lens, while two mini microphones are located beneath the other. The speakers emit inaudible sound waves, which are reflected off the wearer’s moving mouth and back up to the mics like sonar.

Those echoes are then analyzed in real time by a deep learning algorithm on a wirelessly linked smartphone. That algorithm was trained to associate specific echoes with specific mouth movements, which are in turn associated with specific silently spoken commands.
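The sensing loop described above — emit an inaudible chirp, record the reflections off the mouth, reduce them to an echo profile, and map profiles to commands — can be sketched in a few lines. This is a hypothetical illustration only: it uses simple cross-correlation and nearest-template matching in place of the SciFi Lab’s actual deep learning model, and none of the function names, frequencies, or parameters below come from the EchoSpeech paper.

```python
import numpy as np

FS = 48_000          # assumed sample rate (Hz)
CHIRP_LEN = 600      # samples per chirp burst (~12.5 ms at 48 kHz)

def make_chirp(f0=17_000.0, f1=20_000.0, n=CHIRP_LEN, fs=FS):
    """Near-ultrasonic linear chirp: inaudible to most adults,
    playable by ordinary miniature speakers."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)          # sweep rate (Hz per second)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def echo_profile(mic_frame, chirp):
    """Cross-correlate a recorded mic frame with the emitted chirp.
    Peaks in the result correspond to reflectors at different
    time-of-flight delays -- i.e. the current shape of the mouth."""
    return np.abs(np.correlate(mic_frame, chirp, mode="valid"))

def classify(profile, templates):
    """Nearest-template matching as a stand-in for the deep model:
    return the command whose stored echo profile is closest."""
    dists = {cmd: np.linalg.norm(profile - tpl)
             for cmd, tpl in templates.items()}
    return min(dists, key=dists.get)
```

In the real system, a smartphone-side deep network trained on a few minutes of per-user recordings would take the place of `classify`; the sonar front end, however, works on exactly this emit-and-correlate principle.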

 

RELATED
Your smartphone can now use your voice to predict heart attacks

 

EchoSpeech is currently capable of recognizing 31 such commands with about 95% accuracy, and only requires a few minutes of training for each user. And importantly for people with privacy concerns, the system doesn’t incorporate any cameras, nor does it send any information to the internet.

What’s more, because it doesn’t utilize a power-hungry camera, it can run for up to 10 hours on one charge of its battery. By contrast, the researchers claim that experimental camera-based systems are only good for about 30 minutes of use per charge.

The university is now working on commercializing the technology.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” said doctoral student Ruidong Zhang, who is leading the study. “It could give patients their voices back.”

 

RELATED
Scientists have found a way to break down Forever Chemicals

 

The SciFi Lab previously developed a somewhat similar system called EarIO, which uses a sonar-equipped ear-worn device to capture the wearer’s facial expressions – although it’s utilized mainly to create digital avatars. That said, the University at Buffalo’s EarCommand system does read silently spoken words via an earbud which detects distinctive ear canal deformations produced by specific mouth movements.

EchoSpeech is demonstrated in the video above.

Source: Cornell University
