The human brain is complex, but what you think is increasingly being decoded by AI and streamed to TVs and other gadgets.


Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

You might not have heard of Functional Magnetic Resonance Imaging, or fMRI, machines, but not only are they among the most advanced tools for understanding how we think – producing mesmerizing, colorful images of people's brain activity in action – they are increasingly the machines helping researchers decode and transmit your thoughts – your moods, sights, sounds, even the daydreams, memories, and secrets in your mind – directly to the gadgets around you. So they're not only interesting, they're world changing.


Looking at someone’s brain activity this way can tell neuroscientists which brain areas a person is using but not what that individual is thinking, seeing or feeling. Researchers have been trying to crack that code for decades – and now, using Artificial Intelligence (AI) to crunch the numbers, they’ve been making serious progress. Two scientists in Japan recently combined fMRI data with advanced image-generating AI to translate study participants’ brain activity back into pictures that uncannily resembled the ones they viewed during the scans. The original and re-created images can be seen on the researchers’ website.

“We can use these kinds of techniques to build potential brain-machine interfaces,” says Yu Takagi, a neuroscientist at Osaka University in Japan and one of the study’s authors. Such future interfaces could one day help people who currently cannot communicate, such as individuals who outwardly appear unresponsive but may still be conscious. The study was accepted for presentation at the 2023 Conference on Computer Vision and Pattern Recognition.

The study has made waves online since it was posted as a preprint (meaning it has not yet been peer-reviewed or published) in December 2022. Online commentators have even compared the technology to “mind reading.” But that description overstates what this technology is capable of, experts say.


“I don’t think we’re mind reading,” says Shailee Jain, a computational neuroscientist at the University of Texas at Austin, who was not involved in the new study. “I don’t think the technology is anywhere near to actually being useful for patients – or to being used for bad things – at the moment. But we are getting better, day by day.”

The new study is far from the first that has used AI on brain activity to reconstruct images viewed by people. In a 2019 experiment, researchers in Kyoto, Japan, used a type of machine learning called a deep neural network to reconstruct images from fMRI scans. The results looked more like abstract paintings than photographs, but human judges could still accurately match the AI-made images to the original pictures.

Neuroscientists have since continued this work with newer and better AI image generators. In the recent study, the researchers used Stable Diffusion, a so-called diffusion model from London-based start-up Stability AI. Diffusion models – a category that also includes AI image generators such as DALL-E 2 – are “the main character of the AI explosion,” Takagi says. These models learn by adding noise to their training images. Like TV static, the noise distorts the images – but in predictable ways that the model begins to learn. Eventually the model can build images from the “static” alone.
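The forward "noising" step that diffusion models learn from can be sketched in a few lines. The toy below is plain NumPy for illustration only – it is not Stable Diffusion's actual code, which operates on learned image latents and trains a neural network to predict the added noise:

```python
# Toy sketch of a diffusion model's forward (noising) process.
# A schedule of small noise variances (betas) is accumulated into
# alpha_bars, which say how much of the original signal survives
# at each timestep. At late timesteps the image is almost pure static.
import numpy as np

def noise_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Linear schedule of per-step noise variances."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bars = np.cumprod(1.0 - betas)  # cumulative fraction of signal kept
    return alpha_bars

def add_noise(image, t, alpha_bars, rng):
    """Blend a clean image with Gaussian 'static' at timestep t."""
    noise = rng.standard_normal(image.shape)
    noisy = np.sqrt(alpha_bars[t]) * image + np.sqrt(1.0 - alpha_bars[t]) * noise
    return noisy, noise  # the model is trained to predict `noise` from `noisy`

rng = np.random.default_rng(0)
alpha_bars = noise_schedule()
image = rng.standard_normal((8, 8))              # stand-in for a real image
slightly_noisy, _ = add_noise(image, 10, alpha_bars, rng)
pure_static, _ = add_noise(image, 999, alpha_bars, rng)
```

Because the corruption is predictable, a model that learns to undo it step by step can eventually build images from the "static" alone.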


Released to the public in August 2022, Stable Diffusion has been trained on billions of photographs and their captions. It has learned to recognize patterns in pictures, so it can mix and match visual features on command to generate entirely new images. “You just tell it, right, ‘A dog on a skateboard,’ and then it’ll generate a dog on a skateboard,” says Iris Groen, a neuroscientist at the University of Amsterdam, who was not involved in the new study. The researchers “just took that model, and then they said, ‘Okay, can we now link it up in a smart way to the brain scans?’”

The brain scans used in the new study come from a research database containing the results of an earlier study in which eight participants agreed to regularly lie in an fMRI scanner and view 10,000 images over the course of a year. The result was a huge repository of fMRI data that shows how the vision centers of the human brain (or at least the brains of these eight human participants) respond to seeing each of the images. In the recent study, the researchers used data from four of the original participants.

To generate the reconstructed images, the AI model needs to work with two different types of information: the lower-level visual properties of the image and its higher-level meaning. For example, it’s not just an angular, elongated object against a blue background – it’s an airplane in the sky. The brain also works with these two kinds of information and processes them in different regions. To link the brain scans and the AI together, the researchers used linear models to pair up the parts of each that deal with lower-level visual information. They also did the same with the parts that handle high-level conceptual information.


“By basically mapping those to each other, they were able to generate these images,” Groen says. The AI model could then learn which subtle patterns in a person’s brain activation correspond to which features of the images. Once the model was able to recognize these patterns, the researchers fed it fMRI data that it had never seen before and tasked it with generating the image to go along with it. Finally, the researchers could compare the generated image to the original to see how well the model performed.

Many of the image pairs the authors showcase in the study look strikingly similar. “What I find exciting about it is that it works,” says Ambuj Singh, a computer scientist at the University of California, Santa Barbara, who was not involved in the study. Still, that doesn’t mean scientists have figured out exactly how the brain processes the visual world, Singh says. The Stable Diffusion model doesn’t necessarily process images in the same way the brain does, even if it’s capable of generating similar results. The authors hope that comparing these models and the brain can shed light on the inner workings of both complex systems.

As fantastical as this technology may sound, it has plenty of limitations. Each model has to be trained on, and use, the data of just one person.


“Everybody’s brain is really different,” says Lynn Le, a computational neuroscientist at Radboud University in the Netherlands, who was not involved in the research. If you wanted AI to reconstruct images from your brain scans, you would have to train a custom model, and for that, scientists would need troves of high-quality fMRI data from your brain. Unless you consent to lying perfectly still and concentrating on thousands of images inside a clanging, claustrophobic MRI tube, no existing AI model would have enough data to start decoding your brain activity.

Even with those data, AI models are only good at tasks for which they’ve been explicitly trained, Jain explains. A model trained on how you perceive images won’t work for trying to decode what concepts you’re thinking about—though some research teams, including Jain’s, are building other models for that.

It’s still unclear if this technology would work to reconstruct images that participants have only imagined, not viewed with their eyes. That ability would be necessary for many applications of the technology, such as using brain-computer interfaces to help those who cannot speak or gesture to communicate with the world.


“There’s a lot to be gained, neuroscientifically, from building decoding technology,” Jain says. But the potential benefits come with potential ethical quandaries, and addressing them will become still more important as these techniques improve. The technology’s current limitations are “not a good enough excuse to take potential harms of decoding lightly,” she says. “I think the time to think about privacy and negative uses of this technology is now, even though we may not be at the stage where that could happen.”

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
