Meta now has an AI that can read your mind and draw your thoughts


Technologies such as Brain Machine Interfaces (BMIs) and AI are getting ever better at reading, or sensing, your thoughts and translating them into images, sound, text, and video, opening up new possibilities.


After years of development, Meta has finally unveiled a groundbreaking Artificial Intelligence (AI) system that can almost instantaneously decode visual representations in the brain.

Meta’s AI system captures thousands of brain activity measurements per second and then reconstructs how images are perceived and processed in our minds, according to a new research paper published by the company.


“Overall, these results provide an important step towards the decoding – in real time – of the visual processes continuously unfolding within the human brain,” the report said.

The technique leverages MagnetoEncephalography (MEG) to provide a real-time visual representation of thoughts.

MEG is a non-invasive neuroimaging technique that measures the magnetic fields produced by neuronal activity in the brain. By capturing these magnetic signals, MEG provides a window into brain function, allowing researchers to study and map brain activity with high temporal resolution.
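To make the scale of an MEG recording concrete, here is a minimal sketch of what "thousands of brain activity measurements per second" looks like as data. The sample rate and sensor count below are illustrative assumptions, not Meta's actual specifications:

```python
import numpy as np

# Illustrative only: a 1-second MEG recording sampled at 1 kHz
# across 273 sensors (both numbers are assumptions for this sketch).
SAMPLE_RATE_HZ = 1000
N_SENSORS = 273

rng = np.random.default_rng(42)
# MEG data is naturally a time-by-channels array: one row per
# millisecond, one column per magnetic-field sensor.
recording = rng.standard_normal((SAMPLE_RATE_HZ, N_SENSORS))

# Every millisecond yields one reading per sensor, which is where
# the "thousands of measurements per second" figure comes from.
measurements_per_second = SAMPLE_RATE_HZ * N_SENSORS
print(recording.shape, measurements_per_second)  # (1000, 273) 273000
```

The high temporal resolution (millisecond-scale rows) is what lets MEG-based decoders track visual processing as it unfolds, rather than averaging it away as slower techniques like fMRI must.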

The AI system consists of three main components. First, there is the Image Encoder, which creates a set of representations of an image independently of the brain – it essentially breaks down the image into a format that the AI can understand and process.


Second is the Brain Encoder, which aligns MEG signals to the image embeddings created by the Image Encoder. It acts as a bridge, connecting the brain’s activity with the image’s representation.

Third is the Image Decoder, the final component, which generates a plausible image from the brain-aligned representations. It takes the processed information and reconstructs an image that mirrors the original thought.
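The three-stage pipeline can be sketched in code. This is purely a toy illustration of the data flow described above – the class names follow the article, but the dimensions are made up and the "encoders" are simple random linear maps, whereas Meta's system uses deep networks trained on real MEG and image data:

```python
import numpy as np

rng = np.random.default_rng(0)

MEG_CHANNELS = 273  # illustrative MEG sensor count
EMBED_DIM = 64      # illustrative embedding size
IMG_PIXELS = 32 * 32

class ImageEncoder:
    """Maps an image to a brain-independent embedding (toy linear stand-in)."""
    def __init__(self):
        self.W = rng.standard_normal((EMBED_DIM, IMG_PIXELS)) / np.sqrt(IMG_PIXELS)
    def encode(self, image):
        return self.W @ image.ravel()

class BrainEncoder:
    """Aligns MEG signals to the image-embedding space (toy linear stand-in)."""
    def __init__(self):
        self.W = rng.standard_normal((EMBED_DIM, MEG_CHANNELS)) / np.sqrt(MEG_CHANNELS)
    def encode(self, meg_signal):
        return self.W @ meg_signal

class ImageDecoder:
    """Generates a plausible image from a brain-aligned embedding."""
    def __init__(self, image_encoder):
        # Pseudo-inverse of the image encoder's map, so that
        # decode(encode(x)) roughly recovers x in this toy setup.
        self.W = np.linalg.pinv(image_encoder.W)
    def decode(self, embedding):
        return (self.W @ embedding).reshape(32, 32)

# Wire the three stages together: MEG signal -> embedding -> image.
img_enc, brain_enc = ImageEncoder(), BrainEncoder()
img_dec = ImageDecoder(img_enc)

meg_snapshot = rng.standard_normal(MEG_CHANNELS)  # a fake MEG reading
reconstruction = img_dec.decode(brain_enc.encode(meg_snapshot))
print(reconstruction.shape)  # (32, 32)
```

The key design idea the sketch preserves is that the Image Encoder and Brain Encoder map into the *same* embedding space, so a single Image Decoder can turn either an image's embedding or a brain signal's embedding back into a picture.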

Meta’s latest innovation isn’t the only recent advancement in the realm of mind-reading AI. As recently reported, a study led by the University of California, Berkeley showcased the ability of AI to recreate music by scanning brain activity, and elsewhere we’ve seen AI streaming video from people’s minds.


In the former experiment, participants thought about Pink Floyd’s “Another Brick in the Wall,” and the AI was able to generate audio resembling the song using only data from the brain.

Furthermore, advancements in AI and neurotechnology have led to life-changing applications for individuals with physical disabilities. A recent report highlighted a medical team’s success in implanting microchips in a quadriplegic man’s brain. Using AI, they were able to “relink” his brain to his body and spinal cord, restoring sensation and movement and letting him drive a car again. Such breakthroughs hint at the transformative potential of AI in healthcare and rehabilitation.


The potential applications of such technology are vast, from enhancing Virtual Reality (VR) experiences to potentially aiding those who have lost their ability to speak due to brain injuries.

It’s essential to approach such advancements with a balanced perspective, however. The Meta researchers noted that while the MEG decoder is swift, it’s not always precise in image generation. The images it produces represent only higher-level characteristics of the perceived image, such as object categories, but might falter in detailing specifics.

The implications of this technology are profound. Beyond its immediate applications, understanding the foundations of human intelligence and developing AI systems that think like us and that can also read our thoughts could redefine our relationship with technology – again.


“The rapid advances of this technology raise several ethical considerations, and most notably, the necessity to preserve mental privacy,” the researchers warned. Ultimately, while AI can now paint our thoughts, it’s up to us to ensure the canvas remains our own.
