
WHY THIS MATTERS IN BRIEF

Mark Zuckerberg wants to create the world’s largest, and maybe only, telepathic network, and he’s slowly pulling all the pieces together.

 

As if Facebook wasn’t already pervasive enough in everyday life, with its total number of users now reaching two billion people, the company’s newly formed Building 8 “moonshot” factory is working on a radical new Brain Machine Interface (BMI) device that, they say, will let people “type out” up to a hundred words a minute using just the power of thought. They’ve hired 60 of the world’s top neuroscientists to achieve their goal, and the first prototype is slated for 2020.

 


 

The new device, which the team refer to as a “Neural Prosthetic,” is being built by Building 8 under the leadership of Regina Dugan, who used to head up the mad scientists at DARPA, the US military’s bleeding edge research arm, and if everything goes according to plan it will be a non-invasive device that’s simply strapped onto an individual’s head.

In my mind I’m now imagining Facebook’s end game looking like a tricked out, slimmed down Oculus Rift headset that immerses individuals in Augmented Reality (AR) and Virtual Reality (VR) worlds that they control, develop and interact with using nothing more than their minds. Bear in mind that Mark Zuckerberg, who’s busy at the moment building an AI assistant called “Jarvis,” has officially said he’s going all in on AR and VR, and that he also wants Facebook to become the world’s first, and largest, telepathic network, a field in its own right that’s already making news, so I have the sneaky suspicion that I’m not the only one thinking along those lines. It’s inevitable that these technologies will merge, so check back with me in about ten years’ time and let’s see if my futurist’s hunch is right. By the way, if I’m right you’ll owe me a synthetic beer.

 


 

The new prosthetic is equally futuristic. It will be criss-crossed with optical fibres that beam light, photons, from a laser source through the skull and into the areas of the brain that control speech production. Many of today’s BMI devices, such as the ones that help patients with ALS and Locked In Syndrome communicate “telepathically,” like the new “Cyborg” BMI sensors that were developed last year, are invasive and need surgery to be implanted into people’s brains, which obviously won’t fly if Facebook’s new toy is ever to hit the mass market, so it’s crucial that the new device is non-invasive.

“Once in place the device will sample groups of neurons in the brain’s speech center and analyse the instantaneous changes in optical properties as they fire,” says Dugan, “then, light scattering through the neurons would reveal changes in their shape and configuration as the brain cells and their components – mitochondria, ribosomes and cell nuclei, for example – move.”

 


 

The prosthetic will measure the number and type of photons bounced off the neurons in the cortex and send that information, initially via a cable and eventually wirelessly, to a computer that uses Artificial Intelligence (AI) and Machine Learning software to interpret and decode the results. That interpretation would then be typed out as text on the screen of a computer, smartphone or other gadget.

“The speech production network in your brain executes a series of planning steps before you speak,” says Mark Chevillet, Building 8’s technical lead, “in this system we’re looking to decode neural signals from the stage just before you actually articulate what you want to say.”
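
To make that pipeline a little more concrete, here’s a toy sketch, in Python, of what the decoding stage might look like, and I stress might: every name, number and signal shape below is my own illustrative assumption, not anything Building 8 has published. The idea is simply that each time window of optical readings becomes a feature vector that gets matched against the nearest known “word” pattern.

    # Toy sketch of the decoding stage, NOT Facebook's actual system: every
    # name, shape and number here is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["yes", "no", "hello"]   # assumed toy vocabulary
    N_CHANNELS = 64                  # assumed number of optical channels

    def fake_window(word_idx):
        """Simulate one time window of photon-count features for a word."""
        base = np.zeros(N_CHANNELS)
        base[word_idx * 10:word_idx * 10 + 10] = 5.0  # word-specific pattern
        return base + rng.normal(0.0, 1.0, N_CHANNELS)  # add sensor noise

    # "Training": average labelled windows into one centroid per word.
    centroids = np.stack([
        np.mean([fake_window(i) for _ in range(50)], axis=0)
        for i in range(len(VOCAB))
    ])

    def decode(window):
        """Map a window of optical features to the nearest word centroid."""
        dists = np.linalg.norm(centroids - window, axis=1)
        return VOCAB[int(np.argmin(dists))]

    # Decode a short stream of simulated windows.
    stream = [fake_window(i) for i in (0, 2, 1)]
    print(" ".join(decode(w) for w in stream))  # -> "yes hello no"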

Because the researchers are focusing on a very specific application, speech, they know the prosthetic’s sensors must have millimetre-level resolution and be able to sample brain waves about 300 times per second in order to measure the brain’s speech signals with high fidelity, adds Dugan.
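
Those two numbers also let us do some quick back-of-envelope maths on how much raw data the headset would have to shift; in the sketch below only the 300 samples per second figure comes from the team, while the cortex area and bytes-per-sample values are my own guesses.

    # Back-of-envelope data rate for the sampling specs above. The cortex
    # area and bytes-per-sample figures are my assumptions, not specs.
    SPEECH_AREA_CM2 = 10      # assumed patch of speech cortex covered
    SENSORS_PER_CM2 = 100     # 1 mm spacing -> a 10 x 10 grid per cm^2
    SAMPLE_RATE_HZ = 300      # sampling rate quoted by the team
    BYTES_PER_SAMPLE = 2      # assumed 16-bit reading per sensor

    n_sensors = SPEECH_AREA_CM2 * SENSORS_PER_CM2
    bytes_per_sec = n_sensors * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
    print(f"{n_sensors} sensors -> {bytes_per_sec / 1e6:.1f} MB/s raw")
    # 1000 sensors -> 0.6 MB/s raw, modest enough that starting with a
    # cable and moving to wireless later looks entirely plausible.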

 


 

“This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share publicly by sending them to the speech [production] center of your brain,” she says.

Chevillet and Dugan are positioning the new project as a potential communication option, on the one hand for people with ALS, whose systems today “type” at a mere 80 letters, not words, a minute, and on the other as a “more fluid human-computer interface that supports Facebook’s efforts to promote augmented reality.”
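
For a sense of scale, and assuming an average of roughly five letters per word, my illustrative figure, not theirs, that jump works out at something like a six-fold speed up:

    # Rough scale of the claimed jump, assuming about five letters per
    # word, my illustrative average, not a figure from Building 8.
    current_letters_per_min = 80
    target_words_per_min = 100
    target_letters_per_min = target_words_per_min * 5
    print(f"{target_letters_per_min / current_letters_per_min:.1f}x faster")  # 6.2x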

“Even a very simple capability to do something like a yes/no brain click would be foundational for advances in AR,” Dugan says, “and in that respect it becomes a bit like the mouse was in the early computer interface days. Think of it like a ‘brain mouse.’”
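
And to show just how simple that foundational capability could be, here’s a minimal sketch of a “brain click” detector, with a made up threshold and refractory period, that treats a click as nothing more than a debounced threshold crossing on a single decoded signal:

    # Minimal sketch of the "brain mouse": a yes/no click as a debounced
    # threshold crossing. Threshold and refractory period are made up.
    def detect_clicks(signal, threshold=3.0, refractory=5):
        """Return indices where the signal crosses the threshold, ignoring
        re-crossings within `refractory` samples of the last click."""
        clicks, last = [], -refractory
        for i, value in enumerate(signal):
            if value > threshold and i - last >= refractory:
                clicks.append(i)
                last = i
        return clicks

    # A flat signal with two deliberate "click" bursts.
    sig = [0.1] * 10 + [4.0, 4.2] + [0.1] * 10 + [3.8] + [0.1] * 5
    print(detect_clicks(sig))  # -> [10, 22]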

“There are challenges delivering this proposed prosthetic,” says Stephen Boppart, a director at the University of Illinois. “One is whether the changes in the returning light will create patterns unique enough to represent each of the letters, words and phrases needed to translate brain waves into words on a screen. You might be able to train a person to generate different thought patterns over time that would correspond to a particular word or phrase, and while that’s possible, and has already been demonstrated, many other challenges remain.”

 


 

Dugan acknowledges the challenges but says the team intend to build on key research related to their work, so all you have to do now is sit back and think about controlling and interacting with two billion people in your Facebook Oculus Rift AR-VR world with nothing more than your mind.

Get ready for your head to be packed full of cat videos. Oh the humanity of it all! And who says that one day we won’t be able to beam experiences back into people’s heads using the headset, as they do in Total Recall? Or upload information to our brains with just a zap? After all, if we can store information on photons, like the ones the team aim to project into people’s heads, and if we can overcome the obvious neuroscience and technological hurdles, then the world’s our oyster, a virtual oyster of course.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
