Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, is described as "The Adviser behind the Advisers." Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an author, entrepreneur and international speaker who helps investors, multi-nationals, regulators and sovereign governments around the world envision, build and lead the future. Today, aside from being a member of Centrica's prestigious Technology and Innovation Committee and mentoring XPrize teams, Matthew's accomplishments include playing the lead role in helping the world's largest smartphone manufacturers ideate the next five generations of mobile devices, and what comes beyond, and helping the world's largest high-tech semiconductor manufacturers envision the next twenty years of intelligent machines. Matthew's clients include Accenture, Bain & Co, Bank of America, Blackrock, Bloomberg, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
As AI becomes increasingly adept at helping unlock the secrets of the brain, it's also becoming more adept at helping decode complex thoughts and visualise them on TVs and other devices.
The capabilities and accuracy of Artificial Intelligence (AI) "mind reading" systems, as they are increasingly being called, are improving faster than ever. Over the past few years they've increasingly been developed and used to decode thoughts and then visualise them on the small screen, doing everything from pulling images, movies and other content out of people's heads to helping patients with ALS, or "locked-in syndrome," communicate with loved ones. Now, though, a team from Carnegie Mellon University (CMU) has developed an AI that can accurately read complex concepts from just a brain scan, and even piece together entire sentences as they're being thought.
Even the most basic sentence is loaded with more information than you might realise: every word represents a new "concept," and the words' placement and relationship to each other can drastically change the meaning of the whole sentence.
During their research the CMU team found that the “building blocks” the mind uses to construct thoughts are made up of multiple concepts, rather than being based on simple words themselves, and that suggested to them that the brain processes concepts in a universal way, regardless of a person’s language and culture.
“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in evening with my friends’,” says Marcel Just, lead researcher on the study. “Now we have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the [individual] thoughts are built of.”
The study tested how the brain encodes complex thoughts, and how machine learning algorithms can decode them from the output of an fMRI scanner, which detects minute changes in blood flow within a person’s brain as they think.
The researchers put together 240 “complex events,” sentences like “The witness shouted during the trial,” and each of these events was made up of 42 different building blocks, or meaningful components, such as person, setting, size, social interaction and physical action.
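As a rough illustration of this coding scheme, a sentence can be represented as a vector marking which semantic components it contains. This is a minimal sketch: the component names below are a small illustrative subset, not the study's actual list of 42 building blocks, and the binary encoding is an assumption for clarity.

```python
# Sketch: representing a "complex event" sentence as a vector of semantic
# building blocks. The five feature names here are illustrative assumptions;
# the CMU study used 42 such components, which are not all listed in the text.

FEATURES = ["person", "setting", "size", "social_interaction", "physical_action"]

def encode_event(active_features):
    """Return a binary feature vector: 1 if the component is present."""
    return [1 if f in active_features else 0 for f in FEATURES]

# "The witness shouted during the trial" involves a person, a setting,
# a social interaction and a physical action, but says nothing about size.
vector = encode_event({"person", "setting", "social_interaction", "physical_action"})
print(vector)  # [1, 1, 0, 1, 1]
```

Under this kind of scheme, two sentences that share components (a person, a physical action) produce overlapping vectors, which is what lets a model generalise from sentences it has seen to one it has not.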
Each of these different kinds of information is processed in a different part of the brain, and CMU’s new AI was able to pick out the general category of what was on a person’s mind. To test its prowess, the researchers had seven participants read the sentences, recording the brain activation patterns that went along with them each time. After training the algorithm on 239 of the sentences and the matching scans, it was then able to piece together the last sentence based solely on the brain scan data.
Then the team ran the test 240 times, systematically leaving out each of the sentences in turn, and found that the AI was able to predict the missing sentence from a brain activation pattern with 87 percent accuracy. Going the other way, the researchers could feed the program a sentence and it would spit out an accurate brain activation pattern.
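The leave-one-out procedure described above can be sketched in miniature. The following is a synthetic stand-in, not the study's actual model: it assumes a linear mapping from semantic components to voxel activations and uses a simple nearest-pattern matcher, purely to show the shape of the evaluation loop.

```python
import numpy as np

# Synthetic stand-ins for the study's data: each of 240 "sentences" is a
# binary vector of 42 semantic components, and its "scan" is a noisy linear
# mixture of hidden per-component activation patterns. All of this is
# fabricated for illustration; only the 240/42 counts come from the article.
rng = np.random.default_rng(0)
n_sent, n_feat, n_vox = 240, 42, 60

features = rng.integers(0, 2, size=(n_sent, n_feat)).astype(float)
hidden_weights = rng.normal(size=(n_feat, n_vox))   # component -> voxel map
scans = features @ hidden_weights + 0.5 * rng.normal(size=(n_sent, n_vox))

correct = 0
for i in range(n_sent):
    train = np.delete(np.arange(n_sent), i)
    # Fit component-to-voxel weights on the other 239 sentences.
    w_hat, *_ = np.linalg.lstsq(features[train], scans[train], rcond=None)
    # Predict a scan for every candidate sentence, then ask which
    # prediction best matches the held-out scan.
    predicted = features @ w_hat
    dists = np.linalg.norm(predicted - scans[i], axis=1)
    if np.argmin(dists) == i:
        correct += 1

print(f"leave-one-out accuracy: {correct / n_sent:.2f}")
```

On this clean synthetic data the toy decoder scores close to 100 percent; real fMRI signals are far noisier, which is why the study's 87 percent figure is the more meaningful benchmark.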
“Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence,” says Just. “This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of. A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain.”
The research was published in the journal Human Brain Mapping.