The way we use and interact with devices is changing; increasingly, we’ll be using our voices to control, interact with, and manage them.


The butt of jokes as little as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with the computers and devices around them. After all, did you really think, for example, that you were going to use a keyboard and mouse to control and interact with your smartwatch, or your self-driving car? Uh-uh.




In anticipation of the age of voice-controlled electronics, MIT researchers have built what many consider to be the world’s most efficient low-power chip specialised for automatic speech recognition.

Whereas a cell phone running speech recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize – and that’s a game changer.
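Taking the figures above at face value, a quick back-of-the-envelope calculation shows just how large that gap is (the numbers here come straight from the quoted power draws; nothing else is assumed):

```python
# Power figures quoted above: ~1 W for a phone running speech recognition
# software, versus 0.2-10 mW for the new chip.
PHONE_WATTS = 1.0
CHIP_WATTS_LOW = 0.0002    # 0.2 mW, lightest workload
CHIP_WATTS_HIGH = 0.010    # 10 mW, heaviest workload

def saving_pct(baseline, new):
    """Percentage of power saved relative to the baseline."""
    return 100.0 * (baseline - new) / baseline

print(f"Best case:  {saving_pct(PHONE_WATTS, CHIP_WATTS_LOW):.2f}% saved")
print(f"Worst case: {saving_pct(PHONE_WATTS, CHIP_WATTS_HIGH):.2f}% saved")
```

Even at its hungriest, the chip saves about 99 percent of the phone’s power budget, which is why the real-world estimate below is so striking.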

In the real world that will likely translate into power savings of between 90 and 99 percent, which could make voice control a reality for many relatively simple electronic devices, including power-constrained devices that have to harvest energy from their environments, or go months between battery charges. Many of these devices form the backbone of the Internet of Things (IoT), which could include everything from appliances all the way through to city infrastructure, and even cows. Yes, cows – they’re sensor packed and internet connected too, you know.

Please try to keep up, we’re changing the world here. So next time you’re talking to your watch, kettle, or connected cow, think of this article, and of course the people at MIT.




“Speech input will become a natural interface for many wearable applications and intelligent devices,” says Anantha Chandrakasan, a professor of electrical engineering and computer science at MIT, whose group developed the new chip. “The miniaturization of these devices will require a different interface than touch or keyboard. It will be critical to embed the speech functionality locally to save system energy consumption compared to performing this operation in the cloud.”

“I don’t think that we really developed this technology for a particular application,” adds Michael Price, who led the design of the chip. “We’ve tried to put the infrastructure in place to provide better trade-offs to a system designer than they would have had with previous technology, whether it was software or hardware acceleration.”

Today, the best performing speech recognition systems are, like many other state-of-the-art artificial intelligence (AI) systems, based on neural networks – virtual networks of simple information processing systems modelled on the human brain. As a consequence, most of the new chip’s architecture is concerned with making this speech recognition neural network as efficient as possible.

But even the most power efficient speech recognition system would quickly drain a device’s battery if it ran without interruption, so the chip also includes a simpler “voice activity detection” circuit that monitors ambient noise to determine whether it might be speech. If the answer is yes, the chip fires up the larger, more complex speech recognition circuit.
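The gating idea described above can be sketched in a few lines. This is a deliberately simplified illustration, not the chip’s actual detector: the `VoiceActivityGate` class, the energy threshold, and the frame sizes are all hypothetical choices made for the example.

```python
import math

def frame_energy(samples):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

class VoiceActivityGate:
    """Toy sketch of the gating idea: a cheap, always-on detector that
    only wakes the expensive recognition circuit when the ambient audio
    looks like it might be speech."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.recognizer_active = False

    def process(self, frame):
        # Loud frame: fire up the larger, more complex recognizer.
        # Quiet frame: let it sleep and save power.
        self.recognizer_active = frame_energy(frame) > self.threshold
        return self.recognizer_active

gate = VoiceActivityGate(threshold=0.1)
silence = [0.01] * 160       # a quiet 10 ms frame at 16 kHz
speech = [0.5, -0.4] * 80    # a much louder frame
print(gate.process(silence))  # False: recognizer stays asleep
print(gate.process(speech))   # True: recognizer wakes up
```

A real detector is more sophisticated than a single energy threshold, but the power-saving structure – tiny circuit always on, big circuit mostly off – is the same.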




In fact, for experimental purposes, the researchers’ chip had three different voice activity detection circuits, with different degrees of complexity and, consequently, different power demands. Which circuit is most power efficient depends on the context, but in tests simulating a wide range of conditions, the most complex of the three circuits led to the greatest power savings for the system as a whole. Even though it consumed almost three times as much power as the simplest circuit, it generated far fewer false positives; the simpler circuits often chewed through their energy savings by spuriously activating the rest of the chip.
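The trade-off in that experiment can be made concrete with a toy energy model. The numbers below are illustrative assumptions, not figures from the MIT paper; the point is only to show why a detector that draws three times the power can still win overall when it false-triggers less.

```python
# Illustrative numbers only: the recognizer's power, the speech fraction,
# and both false-positive rates are assumptions for this sketch.
RECOGNIZER_MW = 5.0     # power of the full recognition circuit
ACTIVE_FRACTION = 0.05  # assume 5% of ambient audio is real speech
HOURS = 1.0

def system_energy_mwh(vad_mw, false_positive_fraction):
    """Average energy over an hour: the detector runs constantly; the
    recognizer runs on real speech plus whatever the detector
    spuriously triggers on."""
    recognizer_duty = ACTIVE_FRACTION + false_positive_fraction
    return (vad_mw + RECOGNIZER_MW * recognizer_duty) * HOURS

simple = system_energy_mwh(vad_mw=0.02, false_positive_fraction=0.30)
complex_ = system_energy_mwh(vad_mw=0.06, false_positive_fraction=0.02)
print(f"simple detector:  {simple:.2f} mWh")
print(f"complex detector: {complex_:.2f} mWh")
```

With these assumed numbers the complex detector draws 3x the power of the simple one yet uses roughly a quarter of the total energy, because each spurious wake-up of the recognizer dwarfs the detector’s own draw.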

A typical neural network consists of thousands of processing “nodes” capable of only simple computations but densely connected to each other. In the type of network commonly used for voice recognition, the nodes are arranged into layers. Voice data are fed into the bottom layer of the network, whose nodes process and pass them to the nodes of the next layer, whose nodes process and pass them to the next layer, and so on. The output of the top layer indicates the probability that the voice data represents a particular speech sound.
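The layer-by-layer flow described above can be sketched as a single fully connected layer in plain Python. The weights, biases, and the sigmoid activation here are arbitrary illustrative choices; real recognition networks are vastly larger.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each node takes a weighted sum of the
    previous layer's outputs and squashes it through a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return outputs

# Toy 3-input -> 2-node layer. In a real system the top layer's outputs
# would be read as probabilities over speech sounds.
features = [0.2, -0.1, 0.4]  # e.g. acoustic features for one audio frame
w = [[0.5, -0.3, 0.8], [0.1, 0.9, -0.2]]
b = [0.0, 0.1]
print(layer(features, w, b))
```

Stacking several such calls, each feeding its output to the next, gives the layered structure the article describes.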

A voice recognition network is too big to fit in a chip’s onboard memory, which is a problem because going off-chip for data is much more energy intensive than retrieving it from local stores. So the MIT researchers’ design concentrates on minimizing the amount of data that the chip has to retrieve from off-chip memory.




A node in the middle of a neural network might receive data from a dozen other nodes and transmit data to another dozen. Each of those two dozen connections has an associated “weight,” a number that indicates how prominently data sent across it should factor into the receiving node’s computations. The first step in minimizing the new chip’s memory bandwidth is to compress the weights associated with each node. The data are decompressed only after they’re brought on-chip.
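One common way to compress weights – offered here only as an illustration of the idea, not as the scheme the MIT chip actually uses – is to store each weight as a small index into a shared codebook, expanding back to full values only after the data is brought on-chip:

```python
# Hypothetical sketch: weights live off-chip as tiny codebook indices
# ("compression") and are expanded back to approximate values only once
# they have been fetched on-chip ("decompression").
CODEBOOK = [-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0]  # shared lookup table

def compress(weights):
    """Replace each weight with the index of its nearest codebook entry."""
    return [min(range(len(CODEBOOK)), key=lambda i: abs(CODEBOOK[i] - w))
            for w in weights]

def decompress(indices):
    """Expand indices back into approximate weight values (done on-chip)."""
    return [CODEBOOK[i] for i in indices]

weights = [0.93, -0.48, 0.05, 0.27]
packed = compress(weights)     # small integers: cheap to move off-chip
restored = decompress(packed)  # approximate weights, recovered locally
print(packed, restored)
```

The compressed form is lossy – 0.93 comes back as 1.0 – but each weight now costs only a few bits of memory bandwidth instead of a full-precision number.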

The chip also exploits the fact that, with speech recognition, wave upon wave of data must pass through the neural network. The incoming audio signal is split up into 10-millisecond increments, each of which must be evaluated separately. The MIT researchers’ chip brings in a single node of the neural network at a time, but it passes the data from 32 consecutive 10-millisecond increments through it.

If a node has a dozen outputs, then the 32 passes result in 384 output values, which the chip stores locally. Each of those must be coupled with 11 other values when fed to the next layer of nodes, and so on. So the chip ends up requiring a sizable onboard memory circuit for its intermediate computations. But it fetches only one compressed node from off-chip memory at a time, keeping its power requirements low.
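The arithmetic above, and the saving it buys, is easy to check. The network size and audio length in this sketch are assumptions for illustration; the 32-frame batch and dozen outputs come from the text.

```python
# Figures from the text: 32 consecutive 10 ms frames per node fetch,
# and "a dozen" outputs per node.
FRAMES_PER_BATCH = 32
OUTPUTS_PER_NODE = 12
print(FRAMES_PER_BATCH * OUTPUTS_PER_NODE)  # 384 values buffered on-chip

def offchip_fetches(num_nodes, num_frames, frames_per_batch):
    """Each node is fetched from off-chip memory once per batch of
    frames, rather than once per frame."""
    batches = -(-num_frames // frames_per_batch)  # ceiling division
    return num_nodes * batches

# One second of audio = 100 frames of 10 ms; 1,000 nodes is an assumed
# network size for illustration.
print(offchip_fetches(num_nodes=1000, num_frames=100, frames_per_batch=1))
print(offchip_fetches(num_nodes=1000, num_frames=100, frames_per_batch=32))
```

Batching 32 frames per fetch cuts the off-chip traffic from 100,000 node fetches to 4,000 in this toy setup – the trade the chip makes: more onboard buffering in exchange for far fewer energy-hungry off-chip reads.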




The research was funded through the Qmulus Project, a joint venture between MIT and Quanta Computer, the OEM server manufacturer that supplies cloud server systems to the majority of hyperscale datacentre companies, such as Facebook and Google. The chip was prototyped by the Taiwan Semiconductor Manufacturing Company.

About author

Matthew Griffin

Matthew Griffin, award winning Futurist working between the dates of 2020 and 2070, is described as “The Adviser behind the Advisers” and a “Young Kurzweil.” Regularly featured in the global press, including BBC, CNBC, Discovery and RT, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew sits on several boards, and his recent work includes mentoring Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest manufacturers ideate the next 20 years of intelligent devices and machines. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.

