WHY THIS MATTERS IN BRIEF
Technologies such as Brain Machine Interfaces (BMIs) and AI are getting even better at reading, or sensing, your thoughts and translating them into images, sound, text, and video, opening up new possibilities.
Meta’s AI system captures thousands of brain activity measurements per second and then reconstructs how images are perceived and processed in our minds, according to a new research paper published by the company.
“Overall, these results provide an important step towards the decoding – in real time – of the visual processes continuously unfolding within the human brain,” the report said.
The technique leverages magnetoencephalography (MEG) to provide a real-time visual representation of thoughts.
MEG is a non-invasive neuroimaging technique that measures the magnetic fields produced by neuronal activity in the brain. By capturing these magnetic signals, MEG provides a window into brain function, allowing researchers to study and map brain activity with high temporal resolution.
The AI system consists of three main components. First, there's the Image Encoder, which creates a set of representations of an image, independent of the brain – it essentially breaks the image down into a format the AI can understand and process.
Second is the Brain Encoder. This part aligns MEG signals to the image embeddings created by the Image Encoder. It acts as a bridge, connecting the brain's activity with the image's representation.
Third is the Image Decoder, the final component, which generates a plausible image from the brain representations. It takes the processed information and reconstructs an image that mirrors the original thought.
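To make the three-stage pipeline concrete, here is a minimal sketch of the idea in Python with NumPy. Everything here is illustrative: the dimensions, the simulated MEG data, and the ridge-regression "brain encoder" are stand-ins chosen for simplicity, not Meta's actual models (which use deep pretrained image encoders and a learned generative decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- illustrative only, not Meta's configuration.
N_SENSORS = 16      # MEG channels
N_TIMEPOINTS = 20   # samples per analysis window
EMBED_DIM = 32      # size of the shared image-embedding space
N_TRIALS = 200      # paired (MEG recording, image) training examples
N_FEATURES = N_SENSORS * N_TIMEPOINTS

# 1) Image Encoder (stand-in): in the paper this is a pretrained vision
#    model; here we simply assume each image already has an embedding.
image_embeddings = rng.normal(size=(N_TRIALS, EMBED_DIM))

# Simulated MEG: brain activity modeled as a linear mix of the image
# embedding plus sensor noise.
mixing = rng.normal(size=(EMBED_DIM, N_FEATURES))
meg = image_embeddings @ mixing + 0.1 * rng.normal(size=(N_TRIALS, N_FEATURES))

# 2) Brain Encoder: align MEG signals to image embeddings. Ridge
#    regression is the simplest stand-in for the learned alignment model.
lam = 1.0
W = np.linalg.solve(meg.T @ meg + lam * np.eye(N_FEATURES),
                    meg.T @ image_embeddings)
predicted_embeddings = meg @ W

# 3) Image Decoder (stand-in): a generative model would map the predicted
#    embedding back to pixels; here we just verify the prediction lands
#    close to the true embedding, which is what makes decoding possible.
corr = np.mean([
    np.corrcoef(predicted_embeddings[i], image_embeddings[i])[0, 1]
    for i in range(N_TRIALS)
])
print(f"mean per-trial embedding correlation: {corr:.3f}")
```

The key design point this sketch captures is that the image encoder and brain encoder meet in a shared embedding space: once MEG activity can be projected into that space, any decoder that generates images from embeddings can be bolted on at the end.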
Meta's latest innovation isn't the only recent advancement in the realm of mind-reading AI. As recently reported, a study led by the University of California, Berkeley showcased the ability of AI to recreate music by scanning brain activity, and elsewhere we've seen AI streaming video from people's minds.
In the former experiment, participants thought about Pink Floyd’s “Another Brick in the Wall,” and the AI was able to generate audio resembling the song using only data from the brain.
Furthermore, advancements in AI and neurotechnology have led to life-changing applications for individuals with physical disabilities. A recent report highlighted a medical team’s success in implanting microchips in a quadriplegic man’s brain. Using AI, they were able to “relink” his brain to his body and spinal cord, restoring sensation and movement and letting him drive a car again. Such breakthroughs hint at the transformative potential of AI in healthcare and rehabilitation.
It’s essential to approach such advancements with a balanced perspective, however. The Meta researchers noted that while the MEG decoder is swift, it’s not always precise in image generation. The images it produces represent only higher-level characteristics of the perceived image, such as object categories, but might falter in detailing specifics.
The implications of this technology are profound. Beyond its immediate applications, understanding the foundations of human intelligence and developing AI systems that think like us – and that can also read our thoughts – could redefine our relationship with technology, again.
“The rapid advances of this technology raise several ethical considerations, and most notably, the necessity to preserve mental privacy,” the researchers warned. Ultimately, while AI can now paint our thoughts, it’s up to us to ensure the canvas remains our own.