
WHY THIS MATTERS IN BRIEF

Brain implants and prosthetics play an important role in helping people with neurological conditions get their lives back, and now the implants no longer need to be retrained every day.

 

Interested in the Exponential Future? Join our XPotential Community, future proof yourself with courses from our XPotential Academy, connect, watch a keynote, or browse my blog.

As Brain Machine Interfaces (BMIs) improve – helping paralysed patients communicate and control neuro-prosthetic limbs and exoskeletons, and becoming faster and easier to implant, even in pigs, as Elon Musk’s recent demonstration showed – one of the greatest challenges patients face is training the implants to work the way they want so they can get the most out of them, whether that means controlling a cursor or, as a team of paralysed volunteers working with the US DOD did recently, flying fleets of F-35 fighter jets – among other things.

 


Now, in a significant advance, researchers at the University of California San Francisco have created a BMI that is about as close to plug and play as you can get: they used Artificial Intelligence (AI) and machine learning to help a paralysed patient control a computer cursor on a screen using nothing more than his brain activity, with no prior training. As modest as that might sound, in the world of BMIs it’s groundbreaking – the computer equivalent of a user interface that’s easy to use, works first time, and works with anything. It’s also likely that in the future all BMIs will work this way, which will make the technology far more accessible to the people who stand to benefit most from it.

“The BMI field has made great progress in recent years, but because existing systems have had to be reset and recalibrated each day, they haven’t been able to tap into the brain’s natural learning processes. It’s like asking someone to learn to ride a bike over and over again from scratch,” said study senior author Karunesh Ganguly, MD, PhD, an associate professor in the UC San Francisco Department of Neurology. “Adapting an artificial learning system to work smoothly with the brain’s sophisticated long-term learning schemas is something that’s never been shown before in a paralyzed person.”

 


The achievement of “plug and play” performance demonstrates the value of so-called ECoG electrode arrays for BMI applications. An ECoG array comprises a pad of electrodes about the size of a Post-it note that is surgically placed on the surface of the brain. They allow long-term, stable recordings of neural activity and have been approved for seizure monitoring in epilepsy patients. In contrast, past BMI efforts have used “pin-cushion” style arrays of sharp electrodes that penetrate the brain tissue for more sensitive recordings but tend to shift or lose signal over time. In this case, the authors obtained investigational device approval for long-term chronic implantation of ECoG arrays in paralysed subjects to test their safety and efficacy as long-term, stable BMI implants.

In their new paper, published in Nature Biotechnology, Ganguly’s team documents the use of an ECoG electrode array in an individual with paralysis of all four limbs, a condition known as tetraplegia. The participant is also enrolled in a clinical trial designed to test the use of ECoG arrays to allow paralysed patients to control a prosthetic arm and hand, but in the new paper, the participant used the implant to control a computer cursor on a screen instead.

 


The researchers developed a BMI algorithm that uses machine learning to match brain activity recorded by the ECoG electrodes to the user’s desired cursor movements. Initially, the researchers followed the standard practice of resetting the algorithm each day. The participant would begin by imagining specific neck and wrist movements while watching the cursor move across the screen. Gradually the computer algorithm would update itself to match the cursor’s movements to the brain activity this generated, effectively passing control of the cursor over to the user. However, starting this process over every day put a severe limit on the level of control that could be achieved. It could take hours to master control of the device, and some days the participant had to give up altogether.
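The paper’s actual decoder isn’t reproduced in this article, but the closed-loop calibration it describes – watch the cursor, imagine movements, let the algorithm update itself to match – can be sketched with a simple online linear decoder. Everything concrete below (the channel count, the linear model, the learning rate, the simulated “neural tuning”) is an illustrative assumption, not the UCSF implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, N_DIMS = 64, 2    # assumed: 64 ECoG features, 2-D cursor velocity

def reset_decoder():
    """Daily reset: begin each session from near-zero weights,
    as in the standard recalibration practice described above."""
    return rng.normal(scale=0.01, size=(N_DIMS, N_CHANNELS))

def closed_loop_update(W, features, target_velocity, lr=0.02):
    """One adaptation step: nudge the linear decoder so its decoded
    cursor velocity better matches the velocity the user is watching
    (and imagining producing)."""
    decoded = W @ features
    error = target_velocity - decoded
    W = W + lr * np.outer(error, features)   # gradient-style correction
    return W, error

# Simulate one calibration session against a fixed mapping that
# stands in for the user's neural tuning.
true_map = rng.normal(size=(N_DIMS, N_CHANNELS)) / np.sqrt(N_CHANNELS)
W = reset_decoder()
errors = []
for _ in range(500):
    x = rng.normal(size=N_CHANNELS)          # one window of neural features
    W, err = closed_loop_update(W, x, true_map @ x)
    errors.append(np.linalg.norm(err))

print(f"decoding error: {errors[0]:.3f} at start, {errors[-1]:.3f} after 500 trials")
```

Under the daily-reset protocol, this entire loop starts again from `reset_decoder()` every session – exactly the slow re-learning the team set out to eliminate.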

The researchers then let the algorithm continue updating to match the participant’s brain activity without resetting it each day, and found that the continued interplay between brain signals and the machine learning-enhanced algorithm produced continuous improvements in performance over many days. Initially there was a little lost ground to make up each day, but soon the participant was able to achieve top-level performance immediately.
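The difference between the two protocols shows up clearly even in a toy simulation. Below, a linear decoder (all dimensions and learning rates are assumptions for illustration, not the study’s parameters) is either reset to zero each “day” or carried over, against a fixed stand-in for the user’s stable neural tuning – the carried-over decoder starts each new day roughly where it left off:

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, N_DIMS = 64, 2

# Fixed mapping standing in for the user's stable neural tuning.
true_map = rng.normal(size=(N_DIMS, N_CHANNELS)) / np.sqrt(N_CHANNELS)

def run_session(W, n_steps=400, lr=0.02):
    """One day's closed-loop practice; returns the updated decoder
    and the decoding error on the first trial of the day."""
    first_error = None
    for _ in range(n_steps):
        x = rng.normal(size=N_CHANNELS)      # one window of neural features
        error = true_map @ x - W @ x         # target velocity minus decoded
        if first_error is None:
            first_error = np.linalg.norm(error)
        W = W + lr * np.outer(error, x)      # adapt toward the user
    return W, first_error

# Protocol A: reset the decoder at the start of every day.
reset_start_errors = []
for day in range(3):
    W, e0 = run_session(np.zeros((N_DIMS, N_CHANNELS)))
    reset_start_errors.append(e0)

# Protocol B: carry the decoder over from day to day.
W = np.zeros((N_DIMS, N_CHANNELS))
carry_start_errors = []
for day in range(3):
    W, e0 = run_session(W)
    carry_start_errors.append(e0)

print("start-of-day error, daily reset:", [round(e, 3) for e in reset_start_errors])
print("start-of-day error, carried over:", [round(e, 3) for e in carry_start_errors])
```

In the real system the brain adapts too, so the gains compound from both sides; this sketch only models the machine half.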

 


“We found that we could further improve learning by making sure that the algorithm wasn’t updating faster than the brain could follow — a rate of about once every 10 seconds,” said Ganguly, a practicing neurologist with UCSF Health and the San Francisco Veterans Administration Medical Center’s Neurology & Rehabilitation Service. “We see this as trying to build a partnership between two learning systems – brain and computer – that ultimately lets the artificial interface become an extension of the user, like their own hand or arm.”
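That rate cap is a simple engineering idea: keep streaming data continuously, but batch it and let the decoder step only once per interval. A hypothetical sketch – the 10-second figure comes from the quote above, while the buffering and update rule here are invented for illustration:

```python
import numpy as np

UPDATE_INTERVAL_S = 10.0   # cap from the quote: roughly one update per 10 s

class RateLimitedUpdater:
    """Accumulate (features, target) pairs continuously, but apply a
    batched decoder update at most every UPDATE_INTERVAL_S seconds,
    so the machine never adapts faster than the brain can follow."""

    def __init__(self, weights, lr=0.005):
        self.W = weights
        self.lr = lr
        self.buffer = []          # samples seen since the last update
        self.last_update_t = None
        self.n_updates = 0

    def observe(self, t, features, target):
        self.buffer.append((features, target))
        if self.last_update_t is None or t - self.last_update_t >= UPDATE_INTERVAL_S:
            self._apply_batch()
            self.last_update_t = t

    def _apply_batch(self):
        # Average the error gradient over everything buffered, then step once.
        grad = np.zeros_like(self.W)
        for x, v in self.buffer:
            grad += np.outer(v - self.W @ x, x)
        self.W = self.W + self.lr * grad / max(len(self.buffer), 1)
        self.buffer = []
        self.n_updates += 1

# Stream 60 s of simulated neural features at 2 Hz (one sample every 0.5 s):
rng = np.random.default_rng(2)
upd = RateLimitedUpdater(np.zeros((2, 32)))
for i in range(120):                       # t = 0.0, 0.5, ..., 59.5
    upd.observe(i * 0.5, rng.normal(size=32), rng.normal(size=2))

print(f"samples seen: 120, decoder updates applied: {upd.n_updates}")
```

With these numbers the updater sees 120 samples but applies only a handful of updates, one per 10-second window.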

Over time, the participant’s brain was able to amplify patterns of neural activity it could use to most effectively drive the artificial interface via the ECoG array, while eliminating less effective signals – a pruning process much like how the brain is thought to learn any complex task, the researchers say. They observed that the participant’s brain activity seemed to develop an ingrained and consistent mental “model” for controlling the interface, something that had never occurred with daily resetting and recalibration. When the interface was reset after several weeks of continuous learning, the participant rapidly re-established the same patterns of neural activity for controlling the device – effectively retraining the algorithm to its former state.

 


“Once the user has established an enduring memory of the solution for controlling the interface, there’s no need for resetting,” Ganguly said. “The brain just rapidly converges back to the same solution.”

Eventually, once expertise was established, the researchers showed they could turn off the algorithm’s need to update itself altogether, and the participant could simply begin using the interface each day without any need for retraining or recalibration. Performance did not decline over 44 days in the absence of retraining, and the participant could even go days without practicing and see little decline in performance. The establishment of stable expertise in one form of BMI control (moving the cursor) also allowed researchers to begin “stacking” additional learned skills — such as “clicking” a virtual button — without loss of performance.
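Stacking works because the established skill’s decoder can be frozen while a new readout is trained alongside it, leaving the first untouched. A hypothetical sketch – the paper does not describe its click decoder, so the separate threshold detector below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N_CHANNELS = 32

# A frozen, already-calibrated cursor decoder: its weights never update again.
W_cursor = rng.normal(scale=0.1, size=(2, N_CHANNELS))

def decode_cursor(features):
    """Established skill: fixed linear readout of 2-D cursor velocity."""
    return W_cursor @ features

# A new "click" skill stacked on top: a separately learned readout that
# never touches the frozen cursor weights. Here it is just a threshold
# on one (assumed) learned direction in feature space.
w_click = rng.normal(size=N_CHANNELS)
CLICK_THRESHOLD = 2.0

def decode_click(features):
    """Stacked skill: fire a click when the click-direction activity is high."""
    return float(w_click @ features) > CLICK_THRESHOLD

# Using both decoders together on one window of neural features:
x = rng.normal(size=N_CHANNELS)
velocity, clicked = decode_cursor(x), decode_click(x)
print("cursor velocity:", velocity, "| clicked:", clicked)
```

Because the two readouts are independent, adding (or retraining) the click detector cannot degrade the cursor skill – which is the behaviour the researchers report.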

 


Such immediate “plug and play” BCI performance has long been a goal in the field, but has been out of reach because the “pincushion-style” electrodes used by most researchers tend to move over time, changing the signals seen by each electrode. Also, because these electrodes penetrate brain tissue, the immune system tends to reject them, gradually impairing their signal. ECoG arrays are less sensitive than these traditional implants, but their long-term stability appears to compensate for this shortcoming. The stability of ECoG recordings may be even more important for long-term control of more complex robotic systems such as artificial limbs, a key goal of the next phase of Ganguly’s research.

“We’ve always been mindful of the need to design technology that doesn’t end up in a drawer, so to speak, but which will actually improve the day-to-day lives of paralysed patients,” Ganguly said. “The data shows that ECoG-based BMIs could be the foundation for such a technology.”

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, an award-winning futurist, and the author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
