Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Advances in technology and neuroscience have reached the point where streaming your thoughts to a TV is no longer just for the movies.
Neural interfaces, like Brain Machine Interfaces (BMIs), and Artificial Intelligence (AI), which can now be packed into a small pair of glasses or a smart tattoo, are getting better all the time at decoding people’s brainwaves and translating them into faces, images, sentences, and videos that can be viewed and streamed – as well as used to, quite literally, make money. And now, in a continuation of that trend, researchers at the University of Helsinki in Finland have developed a system that can imagine – you heard that right – what a person is thinking and generate corresponding images of it. According to the researchers the new technique can be “used in psychology and cognitive neuroscience, as well as supporting human creativity.” In short, you think of an image, or eventually a video, and the system will render it for you – one day in high definition – which will frankly be as awesome as it is freaky.
The team have developed a technique in which an AI “models a person’s visual perception by monitoring their brain signals.” In a way, it is as if the system tries to imagine what the person is thinking about and then reconstructs it as a synthetic image on the user’s screen. The study was published in Scientific Reports.
The researchers call this method Neuroadaptive Generative Modelling, and 31 volunteers participated in a study that evaluated the effectiveness of the technique. Participants were shown hundreds of AI-generated images of diverse-looking people while their EEG was recorded.
See the tech in action. NB: They are researchers, not YouTube creators! Sorry …
The subjects were then asked to concentrate on certain features, such as faces that looked old or were smiling. While they looked at a rapidly presented series of face images, their EEG signals were fed to a neural network, which inferred whether the brain had registered any given image as matching what the subject was looking for.
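The relevance-detection step works along these lines: when a rapidly flashed image matches what a person is looking for, the brain’s electrical response differs measurably from its response to a non-match. The sketch below is purely illustrative – it is not the study’s actual pipeline – and assumes a simulated P300-style response (a positive EEG deflection roughly 300 ms after a relevant stimulus) with a simple threshold standing in for the neural network the researchers used.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_epoch(is_target, n_samples=100, noise=1.0):
    """Simulate one single-channel EEG epoch at ~200 Hz; target images
    evoke a P300-like positive bump around sample 60 (~300 ms)."""
    epoch = rng.normal(0.0, noise, n_samples)
    if is_target:
        bump = 2.0 * np.exp(-0.5 * ((np.arange(n_samples) - 60) / 8.0) ** 2)
        epoch += bump
    return epoch

def p300_score(epoch):
    """Mean amplitude in the 250-350 ms window, where the
    target-related response is expected."""
    return epoch[50:70].mean()

# Simulate a rapid stream of 200 face images:
# 1 = matched what the subject was concentrating on, 0 = did not.
labels = rng.integers(0, 2, 200)
epochs = np.array([simulate_epoch(bool(y)) for y in labels])
scores = np.array([p300_score(e) for e in epochs])

# Flag an image as "detected by the brain as a match" when its score
# exceeds the overall mean (a crude threshold classifier).
threshold = scores.mean()
predicted = (scores > threshold).astype(int)
accuracy = (predicted == labels).mean()
print(f"single-trial detection accuracy: {accuracy:.2f}")
```

In real EEG the single-trial signal-to-noise ratio is far lower than simulated here, which is why studies of this kind use regularised classifiers over many channels rather than a one-channel threshold.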
Based on this information, the neural network adapted its estimate of what kind of faces people were thinking of. Finally, the images generated by the computer were evaluated by the participants, and they matched the features the participants were thinking of with an accuracy of 83 percent.
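The adaptive loop itself can be sketched as follows. The study used a generative model over face images; here a toy linear “generator” and a simulated brain-response classifier stand in – both are assumptions for illustration – but the core idea is the same: the latent vectors of the images the brain flagged as matches are averaged and fed back, pulling the next batch of generated images toward the features the participant is concentrating on.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 8

# Toy stand-in for a face generator: a fixed linear map from latent
# space to "image" space (a real system would use a GAN).
W = rng.normal(size=(32, LATENT_DIM))
def generate(z):
    return W @ z

# The feature the participant is concentrating on (e.g. "old" or
# "smiling"), expressed as a direction in latent space.
target = rng.normal(size=LATENT_DIM)
target /= np.linalg.norm(target)

def brain_flags_match(z):
    """Simulated relevance feedback: a noisy classifier flags images
    whose latents point roughly toward the target feature."""
    return z @ target + rng.normal(0.0, 0.3) > 0.5

estimate = np.zeros(LATENT_DIM)
for _ in range(10):
    # Show a batch of candidate images sampled around the current estimate.
    candidates = estimate + rng.normal(0.0, 1.0, size=(50, LATENT_DIM))
    flagged = [z for z in candidates if brain_flags_match(z)]
    if flagged:
        # Adapt: move the estimate toward the mean of flagged latents.
        estimate = 0.5 * estimate + 0.5 * np.mean(flagged, axis=0)

final_image = generate(estimate)
similarity = estimate @ target / (np.linalg.norm(estimate) * np.linalg.norm(target))
print(f"cosine similarity of estimate to target feature: {similarity:.2f}")
```

Even with noisy single-trial feedback, averaging over many flagged images each round steers the latent estimate toward the intended feature, which is the essence of the closed-loop approach.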
“The technique combines natural human responses with the computer’s ability to create new information. In the experiment, the participants were only asked to look at the computer-generated images. The computer, in turn, modelled the images displayed and the human reaction toward the images by using human brain responses. From this, the computer can create an entirely new image that matches the user’s intention,” says Tuukka Ruotsalo, Academy of Finland Research Fellow at the University of Helsinki, Finland and Associate Professor at the University of Copenhagen, Denmark.
Generating images of the human face is only one example of the technique’s potential uses. One practical benefit of the study may be that computers can augment human creativity.
“If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create,” Ruotsalo says.
However, the researchers believe that the technique may be used to gain understanding of perception and the underlying processes in our mind.
“The technique does not recognise thoughts but rather responds to the associations we have with mental categories. Thus, while we are not able to find out the identity of a specific ‘old person’ a participant was thinking of, we may gain an understanding of what they associate with old age. We, therefore, believe it may provide a new way of gaining insight into social, cognitive and emotional processes,” says Senior Researcher Michiel Spapé.
According to Spapé, this is also interesting from a psychological perspective.
“One person’s idea of an elderly person may be very different from another’s. We are currently uncovering whether our technique might expose unconscious associations or biases, for example by looking if the computer always renders old people as, say, smiling men.”