
WHY THIS MATTERS IN BRIEF

Being able to read someone’s mind leads down two roads: it can be used as part of a dystopian state apparatus, or it can help catch criminals and give a voice to people who can’t otherwise communicate.

 

It’s frustrating to have a clear mental image of something but be unable to get it across in words or a drawing, and if you’ve ever tried matching the image in your head to the one in a Google search then, aside from lots of other exciting and revolutionary possibilities, this might be the tech for you. Now a team of neuroscientists from the University of Toronto Scarborough (UTS) is coming to your aid – they’ve developed a way to digitally recreate the image someone is thinking about by scanning their brain.

 


 

So-called Artificial Intelligence (AI) “mind reading” technology is getting eerily accurate. Along with allowing people to control prosthetics with their thoughts, these systems have quickly advanced from picking out what letter you’re thinking of to decoding, and visualising on screen, more complex thoughts such as dreams, static images, streaming movies and even sentences. In fact it’s all happening so fast that some researchers have proposed new human rights governing how the brain can be read or manipulated.

 

Watch it in action

 

In this case the UTS team’s study was designed to see whether specific images could be plucked out of a person’s mind. To test the idea they hooked people up to Electroencephalography (EEG) equipment and showed them pictures of faces on a computer screen. The EEG system recorded their brain waves, and after running the data through machine learning algorithms the team’s new AI system was able to digitally recreate the face that the test subject had just seen.
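The pipeline the team describes – EEG features in, reconstructed image out – can be sketched with a simple linear decoder on synthetic data. Everything below is a hypothetical illustration: the variable names, the ridge-regression decoder, and the randomly generated “EEG” and “image” data are assumptions for demonstration, not the study’s actual model.

```python
import numpy as np

# Hypothetical sketch: learn a linear map from EEG feature vectors to image
# vectors (e.g. pixel intensities or PCA components of face photos).
rng = np.random.default_rng(0)

n_trials, n_eeg_features, n_image_dims = 200, 64, 32

# Synthetic "ground truth": images are a hidden linear function of EEG features.
true_map = rng.normal(size=(n_eeg_features, n_image_dims))
eeg = rng.normal(size=(n_trials, n_eeg_features))  # simulated brain activity
images = eeg @ true_map + 0.1 * rng.normal(size=(n_trials, n_image_dims))

train, test = slice(0, 150), slice(150, 200)

# Ridge regression: W = (X'X + lambda*I)^-1 X'Y, fit on the training trials.
lam = 1.0
X, Y = eeg[train], images[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_eeg_features), X.T @ Y)

# "Reconstruct" unseen images from held-out EEG and measure fidelity.
reconstructed = eeg[test] @ W
corr = np.corrcoef(reconstructed.ravel(), images[test].ravel())[0, 1]
print(f"reconstruction correlation on held-out trials: {corr:.2f}")
```

On clean synthetic data like this a linear decoder recovers the images almost perfectly; real EEG is far noisier, which is why the study needed more sophisticated machine learning on top of the same basic idea.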

“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing,” says Dan Nemrodov, co-author of the study. “We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.”

 


 

AI brain reading experiments generally involve one of two methods: either EEG, as is the case here, or functional Magnetic Resonance Imaging (fMRI). The former measures the electrical activity in the brain using a skull cap full of electrodes, while fMRI uses a magnetic field to monitor the minute changes in blood flow that accompany activity in different parts of the brain. Both have their advantages and disadvantages, but EEG, which has previously been used to help people communicate with each other “telepathically,” is more commonly used, less expensive, and records changes faster.

“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale,” says Nemrodov. “So we can see with very fine detail how the percept of a face develops in our brain using EEG.”

That high temporal resolution allowed the team to determine that it takes only about 170 milliseconds for the human brain to form a decent mental picture of a face it’s looking at. In the future the team wants to expand the technique to recreate objects other than faces, and to do so over longer periods of time, allowing virtual reconstruction of images that a person remembers seeing more than a few seconds ago.
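How a millisecond-scale measurement yields a number like 170 ms can be sketched in a few lines: average many noisy trials aligned to the moment a face appears, then read off when the averaged response peaks. The simulated signal, noise levels, and peak time below are assumptions chosen to mirror the article’s figure, not the study’s data.

```python
import numpy as np

# Hypothetical sketch: timing a face-evoked EEG response. We simulate trials
# containing a component that peaks 170 ms after stimulus onset, average them
# into an event-related potential (ERP), and read off the peak latency.
rng = np.random.default_rng(1)

fs = 1000                   # sampling rate in Hz -> one sample per millisecond
t = np.arange(0, 500) / fs  # 0..499 ms after stimulus onset
peak_s = 0.170              # the ~170 ms face response reported in the article

component = np.exp(-((t - peak_s) ** 2) / (2 * 0.015 ** 2))  # Gaussian bump
trials = component + 0.5 * rng.normal(size=(100, t.size))    # 100 noisy trials

erp = trials.mean(axis=0)              # averaging cancels trial-to-trial noise
latency_ms = t[np.argmax(erp)] * 1000  # time of the peak, in milliseconds
print(f"estimated peak latency: {latency_ms:.0f} ms")
```

With fMRI’s seconds-scale sampling the entire 500 ms window above would fall inside a single measurement, which is why this kind of timing question needs EEG.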

 


 

“Our new research could provide a means of communication for people who are unable to verbally communicate,” says Adrian Nestor, co-author of the study. “Not only could it produce a neural based reconstruction of what a person is thinking about, but also of what they remember and imagine, or what they want to express. It could also have forensic uses for law enforcement, gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist.”

The research was published in the journal eNeuro, and the team demonstrates the technique in the video above.

About author

Matthew Griffin

