Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including members of the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Projecting images from one person’s brain into another person’s brain is increasingly possible, even if for now it still sounds like science fiction.
There’s a growing theme of researchers developing both invasive and non-invasive Brain Machine Interfaces (BMIs) that will not only let us all talk and play games telepathically, and even control F-35 fighter jets, but will also let us connect to the machines and AIs in the cloud. And now a Rice University-led team of neuroengineers is embarking on an ambitious four-year project to develop headset technology that can directly link the human brain and machines without the need for surgery. As a first proof of concept the team plans to transmit visual images perceived by one individual into the minds of blind patients – and just the thought of that is mind blowing in itself!
“In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Rice’s Jacob Robinson, the lead investigator on the $18 million project, which was announced today as part of the DARPA Next-Generation Nonsurgical Neurotechnology (N3) program – a program I’ve talked about before, and one that could also one day take us into the field of telepathic warfare.
Sharing visual images between two brains may sound like science fiction, but Robinson said a number of recent technological breakthroughs make the idea feasible. Just how feasible is the question DARPA hopes to address with a series of N3 awards to the Rice-led team and five others that have proposed different technological solutions for the broader challenge of connecting brains and machines.
“Speed is key,” said Robinson, an associate professor of electrical and computer engineering and of bioengineering in Rice’s Brown School of Engineering. “We have to decode neural activity in one person’s visual cortex and recreate it in another person’s mind in less than one-twentieth of a second. The technology to do that, without surgery, doesn’t yet exist. That’s what we’ll be creating.”
Because surgery is a nonstarter, all the N3 teams plan to use some combination of light, ultrasound or electromagnetic energy to read and write brain activity. Rice’s “magnetic, optical and acoustic neural access device,” or MOANA, will test techniques that employ all three. The MOANA team includes 15 co-investigators from Rice, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University and Yale University.
Robinson said a big differentiator between N3-funded teams is how they plan to deal with the 50-millisecond latency threshold as well as DARPA’s requirements for spatial resolution. The agency is seeking devices that can read from and write to a minimum of 16 locations in a volume of the brain about the size of a pea.
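Those two DARPA numbers – a 50-millisecond round trip and at least 16 read/write sites – set a hard ceiling on how much information such a link could carry. A back-of-the-envelope sketch (my illustration, not a figure from the project) makes the constraint concrete, assuming each round trip carries one binary state per channel:

```python
# Hypothetical back-of-the-envelope sketch of what DARPA's N3 specs imply
# for a 16-channel, sub-50 ms brain-to-brain link.  The one-bit-per-channel
# assumption is mine, purely for illustration.

CHANNELS = 16          # minimum read/write sites required by N3
LATENCY_S = 0.050      # end-to-end budget: one-twentieth of a second

# If every round trip refreshes all channels once, the ceiling is:
updates_per_second = 1 / LATENCY_S          # 20 full refreshes per second
bits_per_second = CHANNELS * updates_per_second

print(f"{updates_per_second:.0f} updates/s, {bits_per_second:.0f} bits/s upper bound")
```

Even this generous estimate lands at a few hundred bits per second, which is why the channel count and latency targets, rather than raw bandwidth, are the differentiators DARPA is probing.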
Robinson said MOANA’s decoding and encoding technologies will each employ viral vector gene delivery, a technology that’s in clinical trials for treating macular degeneration, as well as some cancers and neurological conditions. Genetic payloads, which differ for decoding and encoding, will be delivered with the help of ultrasound to select groups of neurons in the 16 target areas of the brain.
To “read” neural activity, the MOANA team will reprogram neurons to make synthetic proteins called “calcium-dependent indicators” that are designed to absorb light when a neuron is active, or firing.
Rice co-investigator Ashok Veeraraghavan said red and infrared wavelengths of light can penetrate the skull, and MOANA’s device will utilize this. The optical subsystem will consist of light emitters and detectors that are arrayed around the target area on a skull cap.
“Most of this light scatters off the scalp and skull, but a small fraction can make it into the brain, and this tiny fraction of photons contain information that is critical to decoding a visual perception,” said Veeraraghavan, an associate professor of electrical and computer engineering and of computer science. “Our aim is to capture and interpret the information contained in photons that pass through the skull twice, first on their way to the visual cortex and again after they are reflected back to the detector.”
MOANA’s photodetectors will be both ultrafast and ultrasensitive. The former is important for ignoring light that scatters off the skull and instead capturing only those photons that have had enough time to travel all the way to the target area of the brain and back.
“By utilizing ultrasensitive, single-photon counting detectors, the tiny signal from brain tissue can be selectively sensed,” Veeraraghavan said.
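The time gating Veeraraghavan describes can be sketched in a few lines. This is an assumed toy model, not MOANA's actual signal chain: photons bouncing off the scalp and skull return quickly, while the rare photons that make the round trip to the visual cortex arrive later, so an ultrafast detector can simply discard early arrivals:

```python
import random

# Illustrative time-of-flight gating sketch (invented numbers, not MOANA's):
# skull-scattered photons arrive early; the tiny fraction that penetrate to
# the brain and reflect back arrive later and carry the useful signal.

GATE_NS = 1.5   # hypothetical gate: reject arrivals earlier than this

def gated_arrivals(arrival_times_ns):
    """Keep only photons late enough to have plausibly reached the brain."""
    return [t for t in arrival_times_ns if t >= GATE_NS]

# Simulated arrival times: a flood of early surface-scattered photons,
# plus a handful of late, deep-travelling ones.
random.seed(0)
early = [random.uniform(0.1, 1.0) for _ in range(1000)]   # scalp/skull bounce
late = [random.uniform(1.5, 3.0) for _ in range(5)]       # brain round trip
kept = gated_arrivals(early + late)
print(len(kept))  # only the deep photons survive the gate
```

The real detectors do this in hardware at single-photon sensitivity, but the principle is the same: arrival time is the filter that separates skull glare from brain signal.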
Veeraraghavan, Robinson and MOANA collaborators Kenneth Shepard and Andreas Hielscher from Columbia Engineering plan to use the detectors to develop a technology called “time-of-flight enhanced functional diffuse optical tomography,” or ToFF-DOT. Like a CT scanner, ToFF-DOT constructs a real-time 3D image of what’s inside the body, but whereas a CT scan uses X-rays, ToFF-DOT uses red and near-infrared light.
Robinson said neurons in the 16 target regions of the visual cortex are expected to show up darker than normal on ToFF-DOT scans when they are firing and their calcium-dependent indicator proteins are absorbing light. Interpreting the dynamic changes from dark to light in the target areas is what MOANA will do to “read” neural activity.
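That "read" step reduces to change detection: a target region whose measured brightness drops below its resting baseline is presumed to be firing. The following is my toy sketch of that idea under assumed thresholds, not the team's decoding algorithm:

```python
# Toy decoding sketch (my illustration, not MOANA's algorithm): mark each of
# the 16 target regions as "firing" when its brightness falls below a
# fraction of its resting baseline, since active neurons expressing
# calcium-dependent indicators absorb more light and show up darker.

DARKENING_THRESHOLD = 0.9   # hypothetical: firing if below 90% of baseline

def decode_activity(baseline, current):
    """Return one boolean per region: True where the region darkened."""
    return [c < b * DARKENING_THRESHOLD for b, c in zip(baseline, current)]

baseline = [1.0] * 16   # resting brightness of the 16 target regions
current = [1.0] * 16
current[3] = 0.80       # region 3 darkens -> presumed firing
current[7] = 0.85       # region 7 darkens -> presumed firing

active = [i for i, firing in enumerate(decode_activity(baseline, current)) if firing]
print(active)  # -> [3, 7]
```

In practice the dark-to-light dynamics are noisy and continuous rather than a clean threshold crossing, which is exactly why the reconstruction and interpretation stages are research problems in their own right.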
Robinson said three years of work, first in cell cultures and then animals, will precede any work with human patients. But he said the MOANA team will coordinate its efforts with Baylor Department of Neurosurgery’s Daniel Yoshor and Michael Beauchamp, who are conducting clinical trials to restore sight to blind patients using an experimental prosthetic that directly stimulates the visual cortex with surgically implanted electrodes.
“There may be patients who prefer a visual prosthetic that doesn’t require brain surgery,” Robinson said. “If our work in cells and animal models goes well, MOANA could be approved for clinical tests as a nonsurgical alternative. It would require gene therapy, but not brain surgery.”
In the brain receiving an image, MOANA would “write” information to neurons that are reprogrammed to fire in response to magnetic signals. The gene therapy payload delivered to these neurons will create proteins that tether either naturally occurring or synthetic iron nanoparticles to ion channels inside the neurons. The influx of calcium through these ion channels is what “fires” a neuron, causing it to actively transmit an electrical impulse.
“We plan to use magnetic fields to heat the iron, which in turn will open the channel and fire the neuron,” Robinson said. “But it’s not enough to do that every second or two. Our system must respond in milliseconds for the receiver and perceiver to experience the perception close enough in time that it seems simultaneous.”
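The write side therefore has its own deadline: each decoded frame must be turned into magnetic pulses for the target regions well inside the overall 50-millisecond budget. A minimal control-loop sketch, where `fire_region()` is a hypothetical stand-in for hardware that heats the iron nanoparticles to open the tethered channels, might look like this:

```python
import time

# Hedged sketch of the "write" side's timing constraint (my framing, not the
# project's control software).  fire_region() is a hypothetical placeholder
# for hardware driving a magnetic field at one of the 16 target sites.

DEADLINE_S = 0.050   # total read -> transmit -> write budget

def fire_region(region):
    """Placeholder: heat iron nanoparticles to open channels at one site."""
    pass

def write_frame(active_regions):
    """Stimulate every active region and report whether we met the budget."""
    start = time.perf_counter()
    for region in active_regions:
        fire_region(region)
    elapsed = time.perf_counter() - start
    return elapsed < DEADLINE_S   # True if this frame met the latency budget

print(write_frame([3, 7]))  # -> True
```

The hard part is not the loop but the physics inside `fire_region`: heating nanoparticles and opening ion channels in single-digit milliseconds, repeatably, is precisely the response time Robinson says the system must achieve.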
Human thought involves the coordinated firing of many neurons, sometimes in different regions of the brain. Rice co-investigator Caleb Kemere said the quality of communication that can be achieved with 16 channels of information is an open question.
“We know that the circuits of the brain that are involved are very dense,” said Kemere, an associate professor of electrical and computer engineering and of bioengineering who has previously studied neural circuits using invasive technologies. “It’s possible or even likely that early 16-channel demonstrations may deliver somewhat muddied perceptions, but this is an exciting path towards a more noninvasive future. The timing, density and performance of the systems we are developing will be orders of magnitude more sophisticated than anything currently available.”