
DARPA human telepathy project will let people transmit images into other people’s brains


WHY THIS MATTERS IN BRIEF

Projecting images from one person’s brain into another person’s brain is becoming increasingly possible, but for now it still sounds like sci-fi.

 


There’s a growing theme of researchers developing both invasive and non-invasive Brain Machine Interfaces (BMIs) that will not only let us all talk and play games telepathically, and control F-35 fighter jets, but that will also let us connect to the machines and AIs in the cloud. And now a Rice University-led team of neuroengineers is embarking on an ambitious four-year project to develop headset technology that can directly link the human brain and machines without the need for surgery. As a first proof of concept the team plans to transmit visual images perceived by one individual into the minds of blind patients – and just the thought of that is mind-blowing in itself.

 

“In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Rice’s Jacob Robinson, the lead investigator on the $18 million project, which was announced today as part of the DARPA Next-Generation Nonsurgical Neurotechnology (N3) program that I’ve talked about before, and which could also one day take us into the field of telepathic warfare.

Sharing visual images between two brains may sound like science fiction, but Robinson said a number of recent technological breakthroughs make the idea feasible. Just how feasible is the question DARPA hopes to address with a series of N3 awards to the Rice-led team and five others that have proposed different technological solutions for the broader challenge of connecting brains and machines.

 

“Speed is key,” said Robinson, an associate professor of electrical and computer engineering and of bioengineering in Rice’s Brown School of Engineering. “We have to decode neural activity in one person’s visual cortex and recreate it in another person’s mind in less than one-twentieth of a second. The technology to do that, without surgery, doesn’t yet exist. That’s what we’ll be creating.”

Because surgery is a nonstarter, all the N3 teams plan to use some combination of light, ultrasound or electromagnetic energy to read and write brain activity. Rice’s “magnetic, optical and acoustic neural access device,” or MOANA, will test techniques that employ all three. The MOANA team includes 15 co-investigators from Rice, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University and Yale University.

 

Robinson said a big differentiator between N3-funded teams is how they plan to deal with the 50-millisecond latency threshold as well as DARPA’s requirements for spatial resolution. The agency is seeking devices that can read from and write to a minimum of 16 locations in a volume of the brain about the size of a pea.

Robinson said MOANA’s decoding and encoding technologies will each employ viral vector gene delivery, a technology that’s in clinical trials for treating macular degeneration, as well as some cancers and neurological conditions. Genetic payloads, which differ for decoding and encoding, will be delivered with the help of ultrasound to select groups of neurons in the 16 target areas of the brain.

 

To “read” neural activity, the MOANA team will reprogram neurons to make synthetic proteins called “calcium-dependent indicators” that are designed to absorb light when a neuron is active, or firing.

Rice co-investigator Ashok Veeraraghavan said red and infrared wavelengths of light can penetrate the skull, and MOANA’s headset will take advantage of this. The optical subsystem will consist of light emitters and detectors arrayed around the target area on a skull cap.

“Most of this light scatters off the scalp and skull, but a small fraction can make it into the brain, and this tiny fraction of photons contains information that is critical to decoding a visual perception,” said Veeraraghavan, an associate professor of electrical and computer engineering and of computer science. “Our aim is to capture and interpret the information contained in photons that pass through the skull twice, first on their way to the visual cortex and again after they are reflected back to the detector.”

 

MOANA’s photodetectors will be both ultrafast and ultrasensitive. The former is important for ignoring light that scatters off the skull and instead capturing only those photons that have had enough time to travel all the way to the target area of the brain and back.

“By utilizing ultrasensitive, single-photon counting detectors, the tiny signal from brain tissue can be selectively sensed,” Veeraraghavan said.
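
To make the idea of time gating a little more concrete, here’s a minimal sketch, in Python and with every number invented purely for illustration, of how arrival-time gating separates the flood of photons that merely bounced off the scalp from the rare stragglers that made the round trip to the cortex and back.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical photon arrival times (picoseconds after the laser pulse).
# Light scattered by the scalp and skull travels a short path and arrives early;
# the rare photons that reach the visual cortex and come back arrive later.
scalp_photons  = rng.normal(loc=100, scale=20, size=100_000)   # early and abundant
cortex_photons = rng.normal(loc=900, scale=80, size=200)       # late and scarce

arrivals = np.concatenate([scalp_photons, cortex_photons])

# Time gate: only count photons whose flight time is long enough for them to
# have plausibly made the round trip to the target depth and back.
GATE_OPEN_PS, GATE_CLOSE_PS = 700, 1200
gated = arrivals[(arrivals >= GATE_OPEN_PS) & (arrivals <= GATE_CLOSE_PS)]

print(f"raw detections:   {arrivals.size}")
print(f"gated detections: {gated.size}  (almost all of them sampled the brain)")
```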

Veeraraghavan, Robinson and MOANA collaborators Kenneth Shepard and Andreas Hielscher from Columbia Engineering plan to use the detectors to develop a technology called “time-of-flight enhanced functional diffuse optical tomography,” or ToFF-DOT. Like a CT scanner, ToFF-DOT constructs a real-time 3D image of what’s inside the body, but whereas a CT scan uses X-rays, ToFF-DOT uses harmless red and infrared light.
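
Under the hood, diffuse optical tomography is usually posed as an inverse problem: the measured changes in detected light are modelled as a linear function of absorption changes inside the tissue, and the absorption map is recovered by solving that system. The toy sketch below, with a made-up sensitivity matrix and noise level, shows the basic shape of such a reconstruction; the real ToFF-DOT pipeline will obviously be far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels = 64    # coarse grid of voxels covering the pea-sized target volume
n_meas   = 128   # source-detector pairs on the skull cap

# Hypothetical sensitivity matrix: how strongly an absorption change in each
# voxel affects each source-detector measurement (in reality this comes from a
# photon-propagation model of the head).
A = rng.random((n_meas, n_voxels))

# Ground truth: a few "active" voxels absorb extra light because their neurons
# are firing and their indicator proteins are soaking up photons.
x_true = np.zeros(n_voxels)
x_true[[5, 20, 41]] = 1.0

# Simulated measurements with a little detector noise.
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Regularised least-squares (Tikhonov) reconstruction, the workhorse of DOT.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ y)

print("most active reconstructed voxels:", np.argsort(x_hat)[-3:])
```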

 

Robinson said neurons in the 16 target regions of the visual cortex are expected to show up darker than normal on ToFF-DOT scans when they are firing and their calcium-dependent indicator proteins are absorbing light. Interpreting the dynamic changes from dark to light in the target areas is what MOANA will do to “read” neural activity.
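
In other words, the “read” step ultimately boils down to turning 16 brightness traces into 16 on/off activity estimates. A minimal, purely illustrative sketch of that thresholding step, with an invented signal model and threshold, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
N_REGIONS = 16

# Hypothetical baseline brightness for the 16 target regions and one ToFF-DOT
# reading: regions whose neurons are firing absorb more light, so they read darker.
baseline = np.full(N_REGIONS, 1.0)
true_activity = rng.integers(0, 2, size=N_REGIONS).astype(bool)
reading = baseline - 0.2 * true_activity + 0.02 * rng.standard_normal(N_REGIONS)

# Decode: any region whose brightness has dropped past a threshold is "firing".
DARKENING_THRESHOLD = 0.1
decoded = (baseline - reading) > DARKENING_THRESHOLD

print("true activity:   ", true_activity.astype(int))
print("decoded activity:", decoded.astype(int))
```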

Robinson said three years of work, first in cell cultures and then animals, will precede any work with human patients. But he said the MOANA team will coordinate its efforts with Baylor Department of Neurosurgery’s Daniel Yoshor and Michael Beauchamp, who are conducting clinical trials to restore sight to blind patients using an experimental prosthetic that directly stimulates the visual cortex with surgically implanted electrodes.

 

“There may be patients who prefer a visual prosthetic that doesn’t require brain surgery,” Robinson said. “If our work in cells and animal models goes well, MOANA could be approved for clinical tests as a nonsurgical alternative. It would require gene therapy, but not brain surgery.”

In the brain receiving an image, MOANA would “write” information to neurons that are reprogrammed to fire in response to magnetic signals. The gene therapy payload delivered to these neurons will create proteins that tether either naturally occurring or synthetic iron nanoparticles to ion channels inside the neurons. The release of calcium through these ion channels is what “fires” a neuron, causing it to actively transmit an electrical impulse.
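
To get a feel for why timing is the hard part of this magnetothermal trick, here’s a toy model, with every constant invented for illustration, of heat building up in a tethered nanoparticle under an alternating field until the channel opens and the neuron fires.

```python
# Toy magnetothermal model (every constant here is invented for illustration):
# an alternating magnetic field deposits heat in the iron nanoparticles tethered
# to a neuron's ion channels; once the local temperature rise crosses a
# threshold the channel opens, calcium flows in, and the neuron fires.

DT = 1e-4              # simulation time step, seconds
HEATING_RATE = 400.0   # K/s while the field is on (hypothetical)
COOLING_TAU = 0.02     # s, how quickly heat leaks away into surrounding tissue
FIRE_THRESHOLD = 5.0   # K of local temperature rise needed to open the channel

def time_to_fire(field_on: bool = True) -> float:
    """Integrate the temperature rise and return the time (s) until the channel opens."""
    temp_rise, t = 0.0, 0.0
    while temp_rise < FIRE_THRESHOLD:
        drive = HEATING_RATE if field_on else 0.0
        temp_rise += DT * (drive - temp_rise / COOLING_TAU)
        t += DT
        if t > 1.0:    # give up if the field can't push the particle past threshold
            return float("inf")
    return t

print(f"time to fire: {time_to_fire() * 1e3:.1f} ms")  # needs to sit well inside the ~50 ms budget
```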

 

“We plan to use magnetic fields to heat the iron, which in turn will open the channel and fire the neuron,” Robinson said. “But it’s not enough to do that every second or two. Our system must respond in milliseconds for the sender and receiver to experience the perception close enough in time that it seems simultaneous.”
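
That 50-millisecond ceiling has to cover the entire chain, not just the stimulation step. A back-of-envelope budget, where every figure below is a placeholder rather than a measured number, shows how little slack there is:

```python
# Hypothetical end-to-end latency budget for one brain-to-brain "frame".
# Every figure is an assumption for illustration, not a MOANA specification.
budget_ms = {
    "optical read-out of the sender's visual cortex":     15,
    "decoding the 16-region activity pattern":              5,
    "transmitting the pattern between headsets":            5,
    "magnetothermal stimulation until the neurons fire":   20,
}

total = sum(budget_ms.values())
status = "within budget" if total <= 50 else "over budget"
print(f"total: {total} ms of DARPA's 50 ms threshold -> {status}")
```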

Human thought involves the coordinated firing of many neurons, sometimes in different regions of the brain. Rice co-investigator Caleb Kemere said the quality of communication that can be achieved with 16 channels of information is an open question.

 

“We know that the circuits of the brain that are involved are very dense,” said Kemere, an associate professor of electrical and computer engineering and of bioengineering who has previously studied neural circuits using invasive technologies. “It’s possible or even likely that early 16-channel demonstrations may deliver somewhat muddied perceptions, but this is an exciting path towards a more noninvasive future. The timing, density and performance of the systems we are developing will be orders of magnitude more sophisticated than anything currently available.”
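
Sixteen channels really isn’t much bandwidth. Treating each target region as a binary channel updated once per 50-millisecond frame gives a crude upper bound on the data rate, which puts the “muddied perceptions” caveat into perspective:

```python
# Crude upper bound on the brain-to-brain data rate (illustrative assumptions only).
CHANNELS = 16          # read/write sites in the visual cortex
FRAME_MS = 50          # one update per DARPA latency window
BITS_PER_CHANNEL = 1   # treat each site as simply "firing" or "not firing"

frames_per_second = 1000 / FRAME_MS
bits_per_second = CHANNELS * BITS_PER_CHANNEL * frames_per_second
print(f"~{bits_per_second:.0f} bits/s")  # roughly a 4x4 binary image, 20 times a second
```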


Comments

Here’s a petition to make the Right to Mental Privacy one of our new human rights:

https://www.change.org/p/make-the-right-to-mental-privacy-one-of-our-new-human-rights

If I’m connected to equipment that maintains 24-hour telekinesis extractions and involuntary communication, how do I disconnect and retrieve the specifics accumulated within the data that’s been extracted without my consent?

Well, without any robust platform, policy, or safeguards in place you probably wouldn’t be able to. It would be possible if these were in place, but then again, look at today’s situation with “normally generated data,” where it’s almost impossible to extract or get your data back. IMHO it’s a system and policy issue, but companies have to create and implement those systems and policies – and will they want to, or be motivated to? Probably not.

