
These sonar-enabled smart glasses can listen in on silent voice commands


WHY THIS MATTERS IN BRIEF

Some people can’t speak and others don’t want to speak out loud in certain situations, but now your silent words can be heard.

 


Some people lack the power of speech, others find themselves in noisy settings where speaking voice commands out loud just doesn’t work, and still others simply don’t want the people around them to hear what they’re saying. All of these scenarios have been problematic for privacy-conscious people in the past, but as more gadgets emerge that let people talk silently to whoever is on the other end of the phone, another solution has arrived in the form of EchoSpeech glasses, which read their wearer’s silently spoken words.

 


 

The experimental eyewear is being developed by a team at Cornell University’s Smart Computer Interfaces for Future Interactions (SciFi) Lab.

Two downwards-facing miniature speakers are mounted on the underside of the frame beneath one lens, while two mini microphones are located beneath the other. The speakers emit inaudible sound waves, which are reflected off the wearer’s moving mouth and back up to the mics like sonar.

Those echoes are then analyzed in real time by a deep learning algorithm on a wirelessly linked smartphone. That algorithm was trained to associate specific echoes with specific mouth movements, which are in turn associated with specific silently spoken commands.
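The core idea described above (emit an inaudible chirp, record its reflection off the moving mouth, and classify the resulting echo profile) can be sketched in a few lines of code. This is an illustrative toy, not Cornell's actual pipeline: the sample rate, chirp parameters, command names, and the nearest-template classifier standing in for their deep learning model are all assumptions made for the sketch.

```python
import numpy as np

RATE = 48_000    # sample rate in Hz (assumed)
CHIRP_LEN = 480  # 10 ms probe chirp

def chirp(f0=17_000, f1=20_000, n=CHIRP_LEN, rate=RATE):
    """Near-ultrasonic linear chirp, inaudible to most adults (assumed band)."""
    t = np.arange(n) / rate
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1])))

def echo_profile(recording, probe):
    """Cross-correlate the mic signal with the emitted chirp; correlation
    peaks encode reflection delays, i.e. the geometry of the mouth."""
    corr = np.correlate(recording, probe, mode="valid")
    return np.abs(corr) / (np.linalg.norm(corr) + 1e-9)  # unit-normalized

def classify(profile, templates):
    """Nearest-template classifier, a stand-in for the deep model the
    researchers train to map echoes to mouth movements."""
    return min(templates, key=lambda cmd: np.linalg.norm(profile - templates[cmd]))

# Simulate two mouthed commands as different reflection patterns (toy data).
rng = np.random.default_rng(0)
probe = chirp()

def simulate(delays, gains):
    """Build a mic recording: delayed, attenuated copies of the chirp plus noise."""
    sig = np.zeros(CHIRP_LEN * 4)
    for d, g in zip(delays, gains):
        sig[d:d + CHIRP_LEN] += g * probe
    return sig + 0.01 * rng.standard_normal(sig.size)

templates = {
    "play":  echo_profile(simulate([30, 90], [1.0, 0.4]), probe),
    "pause": echo_profile(simulate([50, 140], [0.8, 0.6]), probe),
}
query = echo_profile(simulate([30, 92], [0.95, 0.45]), probe)  # a noisy "play"
print(classify(query, templates))  # → play
```

In the real system the per-user training the article mentions plays the role of the hand-built templates here: a few minutes of labeled examples let the deep model learn each wearer's echo-to-command mapping.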

 


 

EchoSpeech is currently capable of recognizing 31 such commands with about 95% accuracy, and only requires a few minutes of training for each user. And importantly for people with privacy concerns, the system doesn’t incorporate any cameras, nor does it send any information to the internet.

What’s more, because it doesn’t utilize a power-hungry camera, it can run for up to 10 hours on one charge of its battery. By contrast, the researchers claim that experimental camera-based systems are only good for about 30 minutes of use per charge.

The university is now working on commercializing the technology.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” said doctoral student Ruidong Zhang, who is leading the study. “It could give patients their voices back.”

 


 

The SciFi Lab previously developed a somewhat similar system called EarIO, which uses a sonar-equipped ear-worn device to capture the wearer’s facial expressions – although it’s utilized mainly to create digital avatars. That said, the University at Buffalo’s EarCommand system does read silently spoken words via an earbud which detects distinctive ear canal deformations produced by specific mouth movements.

EchoSpeech is demonstrated in the video above.

Source: Cornell University
