Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Being able to read people’s minds and visualise their thoughts and memories on a screen will revolutionise almost every part of society, and that’s before we even start talking about telepathy and futuristic brain downloading.
It already seems a little like computers can read our minds. Features like Google’s Auto-Complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder how they knew, and, for better or worse, it seems we’re slowly but surely moving towards computers reading our minds for real.
However, you’ll probably be glad to hear we’re still a way off from realising that future in earnest, a future which will no doubt spark some interesting privacy debates. That said, over the past few years there have been some notable advances in the field, and the results are only getting better.
Reconstructing Geometric Images From People’s Thoughts
Past experiments and results have included, among others, the University of California, Berkeley’s ability to stream movies from people’s brains, Texas Tech University’s ability to pull secrets from people’s brains, and, not to be left out, the University of Wisconsin’s experiment to stream people’s dreams during REM sleep with an 84 percent accuracy rate. And notice I haven’t yet mentioned our ability to communicate telepathically or upload knowledge to our brains, topics I’ve covered in other articles. Neuroscience is getting interesting…
Reconstructing Complex Images From People’s Thoughts
Now though, building on yet another mind reading experiment, this time from the University of Oregon, which a little while ago managed to pull images of people out of volunteers’ heads, a team from Japan has announced that it has gone one step further and reconstructed and streamed images from people’s brains at a higher resolution than previous attempts.
Using more modern equipment than these “early” experiments, the team from Kyoto University, led by Yukiyasu Kamitani, wants to create the “ultimate in mind reading and visualisation tech.” And one day they hope it’ll work both ways, both pulling images and thoughts out of your head and “putting them back in.” Make of that what you will…
While previous experiments in the space used a combination of basic Artificial Intelligence (AI), Brain Machine Interfaces (BMI), and fMRI to develop a technique they called Binary Pixel Deconstruction, which essentially produced grainy, pixelated images from people’s brainwaves, the Japanese team used more advanced technologies to create a technique known as Deep Image Reconstruction (DIR). The main difference between the two is the quality of the mind reading and the quality of the resulting images.
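To get a feel for how the earlier pixel-level approach works, here is a deliberately tiny sketch, not code from any of the papers cited. It assumes, as those “early” experiments broadly did, that each pixel’s intensity can be decoded from voxel activity with a linear model, so reconstruction reduces to a regularised regression from simulated brain responses back to pixels. All dimensions and data here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 100 fMRI voxels, a 10x10 binary image (100 pixels).
n_voxels, n_pixels, n_trials = 100, 100, 500

# Synthetic training data: random stimulus images, plus simulated voxel
# responses that depend linearly on the pixels with some noise.
true_map = rng.normal(size=(n_voxels, n_pixels))
images = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)
voxels = images @ true_map.T + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Ridge regression from voxel patterns back to pixel values:
# W = (V^T V + lam * I)^-1 V^T X
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                    voxels.T @ images)

# Reconstruct a held-out image: predict each pixel, threshold to binary.
test_img = rng.integers(0, 2, size=n_pixels).astype(float)
test_vox = test_img @ true_map.T
recon = (test_vox @ W > 0.5).astype(float)
accuracy = (recon == test_img).mean()
```

On clean simulated data like this the decoder is near perfect; the real experiments faced far noisier signals and richer stimuli, which is exactly why the results were grainy and pixelated.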
The Japanese team’s new approach used a Generative Adversarial Network (GAN), a form of AI and the same type that’s being used to build creative machines that can innovate products, combined with a Deep Neural Network (DNN) to produce natural looking images. They published details of their project, which is awaiting peer review, titled “Deep image reconstruction from human brain activity,” at the end of December last year.
The team conducted their research over a 10 month period and started off by creating three categories of images for their volunteers to view. These included animals and people, geometric shapes, and letters.
The data collected by analysing the volunteers’ brain activity, both during and after they viewed the images, was then run through the DNN and the Deep Generator Network (DGN) to create and fine tune the images the volunteers were seeing, and while the results are still far from perfect, some of them are in the ballpark. For example, the reconstructed image with the swan in it is white, and the one with the duck clearly has a duck shaped outline, not to mention a lot of the other images that also, if you squint, look somewhat like their originals. And for any cynics out there, remember, these are all small steps; this is neuroscience at, if not beyond, the bleeding edge, and the system is learning and fine tuning itself all the time. We’ve all seen how fast AI can go from being taught to, for example, teaching itself faster than any human in just under a year.
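To see how Deep Image Reconstruction differs from the pixel-by-pixel approach, here is a minimal sketch of its core loop: start from a random image and iteratively adjust it so that its features match the features decoded from brain activity. A fixed linear map stands in for the pretrained DNN (so the gradient is easy to write by hand), and the “decoded” targets are simulated; in the actual method the features come from a deep network and the reconstruction runs through a generator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the DNN feature extractor: a fixed random linear map.
# (The real method uses hierarchical features from a pretrained deep net.)
n_pixels, n_features = 64, 32
Wf = rng.normal(size=(n_features, n_pixels)) / np.sqrt(n_pixels)

# "Decoded" target features, as if they had been predicted from fMRI data.
true_image = rng.normal(size=n_pixels)
target = Wf @ true_image

# Core reconstruction loop: gradient descent on the image itself so that
# its features move towards the decoded targets.
x = rng.normal(size=n_pixels)  # start from noise
lr = 0.3
losses = []
for _ in range(300):
    err = Wf @ x - target
    losses.append(float(err @ err))
    x -= lr * (Wf.T @ err)  # gradient of 0.5 * ||Wf @ x - target||^2
```

In the Kyoto work, this kind of loop is wrapped in the DGN: rather than updating pixels directly, the latent input of a generator network is optimised, which is what keeps the reconstructions looking like natural images instead of noise.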
“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity. Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities, so we used that to tailor our approach,” said Kamitani.
Although Kamitani is quick to acknowledge that a lot more work needs to be done to perfect the technology, he adds that once it is perfected it could revolutionise visualisation technology and BMI devices, with applications ranging from product placement to improving the level of care offered to psychiatric patients by visualising their hallucinations.
So while we might still be some way away from being able to just project our thoughts onto YouTube, heaven forbid, slowly we’re getting closer, and looking at the advances, I don’t think it’ll be too long before this technology really starts cranking up the gears.
Imagine “instant art,” where you could create art just by picturing it in your head. Or what if an AI could record your brain activity as you’re asleep and dreaming, then re-create your dreams so you, or your shrink, can analyse them later? Just last year, completely paralysed patients suffering from ALS were able to communicate with their families for the first time using a BMI, and there are countless creative and significant ways to use a model like the one in the Kyoto study.
That said, BMIs are also one of those technologies that could have negative implications for society at large, something that neuroethicists are becoming increasingly vocal about. But that still isn’t deterring others from entering the field, entrepreneurs such as Elon Musk, who famously founded Neuralink with the purpose of building a “Neural Lace” that can connect people to computers and AI, while elsewhere a company called Kernel is working on chips that can read and write neural code.
Whether it’s to recreate images, mine our deep subconscious, or give us entirely new capabilities, such as eventually downloading our minds into avatars, computers, the internet, or robots, it’s in our best interest that mind reading technology proceeds with caution. But in the meantime, does anyone know where I can buy a Magneto helmet?