Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
In the future, what’s in your head won’t stay in your head as companies develop new, powerful Brain Machine Interface devices – and that’s both good and bad.
Back in 2017, Facebook announced that it wanted to create a “telepathic headband” that would let people type at a speed of 100 words per minute using just their thoughts. To realise that goal, and to turn Facebook into “the world’s first telepathic social network,” as he put it, Mark Zuckerberg hired Regina Dugan, the former director of DARPA, the US military’s bleeding-edge research arm, to develop a non-invasive Brain Machine Interface (BMI) device that could interpret brain signals using just light, via optical imaging techniques.
Now, a little over two years later, the social media giant has revealed that it has been financing “extensive university research” on human volunteers, and that the researchers have made some significant breakthroughs.
This week, some of that research was described in a scientific paper from the University of California, San Francisco, where researchers have been developing “speech decoders” able to determine what people are trying to say by analysing their brain signals. The research is important for a number of reasons – firstly because it could help show whether a non-invasive wearable brain-control device is feasible, and secondly because it’s an early example of a giant tech company being involved in trying to get hold of data directly from people’s minds – something that should freak even the most chilled people out.
To some neuro-ethicists, that means we are going to need some rules, and fast, about how brain data is collected, stored, and used – something I discussed a little while ago, and something that’s an increasing concern as more companies, such as Elon Musk’s Neuralink, develop more powerful and capable BMI devices and neuro-prosthetics that can do everything from decode and translate human thoughts to boost, decode and download, edit, erase, stream, and upload knowledge and memories to and from the human brain with increasing accuracy and speed.
In the report published in Nature Communications, UCSF researchers led by neuroscientist Edward Chang used sheets of electrodes, called ECoG arrays, that were placed directly on the brains of volunteers.
The scientists were able to listen in, in real time, as three subjects heard questions read from a list and spoke simple answers. One question was “From 0 to 10, how much pain are you in?” The system was able to detect both the question and the answer from 0 to 10 with 61 percent accuracy – far better than chance.
Another question asked was which musical instrument they preferred, and the volunteers were able to answer “piano” and “violin.” The volunteers were undergoing brain surgery for epilepsy. Facebook says the research project is ongoing, and that it is now funding UCSF’s efforts to restore the ability to communicate to a disabled person with a speech impairment.
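To make the decoding idea concrete, here’s a deliberately simplified sketch – not the UCSF team’s actual pipeline, whose methods are described in the Nature Communications paper – of how a classifier can “decode” which of eleven answers was spoken from neural feature vectors. All of the data, feature counts, and noise levels below are invented for illustration; a nearest-centroid classifier stands in for the real speech decoder.

```python
# Illustrative sketch (NOT the UCSF method): decoding a small set of spoken
# answers ("0".."10") from simulated ECoG feature vectors with a
# nearest-centroid classifier. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 11         # answers "0".."10" to the pain question
N_FEATURES = 64        # e.g. high-gamma power across 64 electrodes (assumed)
TRIALS_PER_CLASS = 20

# Synthetic premise: each answer evokes a distinct mean activity pattern.
class_means = rng.normal(0.0, 1.0, size=(N_CLASSES, N_FEATURES))

def simulate_trials(n_per_class, noise=0.8):
    """Generate noisy feature vectors around each class's mean pattern."""
    X, y = [], []
    for c in range(N_CLASSES):
        X.append(class_means[c] + rng.normal(0.0, noise, size=(n_per_class, N_FEATURES)))
        y.extend([c] * n_per_class)
    return np.vstack(X), np.array(y)

X_train, y_train = simulate_trials(TRIALS_PER_CLASS)
X_test, y_test = simulate_trials(5)

# "Training": average the feature vectors recorded for each answer.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in range(N_CLASSES)])

# "Decoding": assign each test trial to the nearest centroid.
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)

accuracy = (predictions == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / N_CLASSES:.2f})")
```

On this cleanly separable toy data the decoder scores far above the roughly 9 percent chance level; real intracranial recordings are far messier, which is why the reported 61 percent is notable.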
Eventually, Facebook wants to create a wearable headset that lets users control music or interact in virtual reality using their thoughts.
To that end, Facebook has also been funding work on systems that listen in on the brain from outside the skull, using fiber optics or lasers to measure changes in blood flow, similar to an fMRI machine.
Such blood-flow patterns represent only a small part of what’s going on in the brain, but they could be enough to distinguish between a limited set of commands.
“Being able to recognize even a handful of imagined commands, like ‘home,’ ‘select,’ and ‘delete,’ would provide entirely new ways of interacting with today’s VR systems – and tomorrow’s AR glasses,” Facebook wrote in a blog post.
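As a rough illustration of what recognising “a handful of imagined commands” could look like downstream of the optics, here’s a minimal sketch that matches a measured response against stored per-command templates using cosine similarity, and refuses to guess when nothing matches well. The templates, signals, and threshold are entirely made up; a real optical BMI would derive its features from blood-flow measurements.

```python
# Hypothetical sketch of small-vocabulary command recognition: compare an
# observed response against one stored template per command and only accept
# a confident match. All numbers here are invented for illustration.
import math

TEMPLATES = {
    "home":   [0.1, 0.4, 0.9, 0.6, 0.2],
    "select": [0.8, 0.5, 0.2, 0.1, 0.1],
    "delete": [0.2, 0.2, 0.3, 0.7, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length signals."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def decode_command(signal, threshold=0.9):
    """Return the best-matching command, or None if no template matches well."""
    best, score = max(((name, cosine(signal, t)) for name, t in TEMPLATES.items()),
                      key=lambda p: p[1])
    return best if score >= threshold else None

# A noisy observation that still resembles the 'home' template.
print(decode_command([0.15, 0.35, 0.85, 0.55, 0.25]))  # → home
# A flat signal resembling nothing is rejected rather than misread.
print(decode_command([1.0, 1.0, 1.0, 1.0, 1.0]))       # → None
```

The rejection threshold matters for exactly the reason Facebook’s blog post implies: with only a few commands, it is better for a VR or AR interface to do nothing than to fire the wrong one.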
Facebook has plans to demonstrate a prototype portable system by the end of the year, although the company didn’t say what it would be capable of, or how it would measure the brain.
Research on brain-computer interfaces has been speeding up as rich tech companies jump in. In July, Neuralink said it would use an autonomous surgical robot to implant electrodes into the brains of paralyzed volunteers within two years.
However, the public has reason to doubt whether tech companies can be trusted with a window into their brains. Last month, for example, Facebook was hit with a record $5 billion fine for deceiving customers about how their personal information gets used.
“To me the brain is the one safe place for freedom of thought, of fantasies, and for dissent,” says Nita Farahany, a professor at Duke University who specializes in neuro-ethics. “We’re getting close to crossing the final frontier of privacy in the absence of any protections whatsoever.”
Facebook emphasizes that all the brain data collected at UCSF will stay at the university, but Facebook employees are able to go there to study it.
It’s not known how much money Facebook is providing the university nor how much volunteers know about the company’s role. A university spokesman, Nicholas Weiler, declined to provide a copy of the research contract or the consent forms signed by patients. He said the consent forms list Facebook among several potential sponsors of the research.
While a brain reader could be a convenient way to control devices, it would also mean Facebook would be hearing brain signals that could, in theory, give it much more information, like how people are reacting to posts and updates.
“Brain data is information-rich and privacy sensitive, it’s a reasonable concern,” says Marcello Ienca, a brain-interface researcher at ETH in Zurich. “Privacy policies implemented at Facebook are clearly insufficient.”
Facebook says it will do better with brain data. “We take privacy very seriously,” says Mark Chevillet, who leads the brain reading project at Facebook.
And who could disagree with that statement? After all, their track record so far on data and user privacy has been exemplary. Hahahahahahahahahaha.