
WHY THIS MATTERS IN BRIEF

Neuromorphic chips, which mimic how the human brain operates and carries out tasks, still rely on silicon. Moving to optical, or light-based, computing could let them process information many times faster than today’s most advanced chips.

 

Neural networks are taking the world of computing by storm. Researchers have used them to create machines that are learning a huge range of skills that had previously been the unique preserve of humans – everything from object and face recognition to natural language processing and machine translation.

All these skills, and many more like them, are now becoming routine for machines.

 


As a consequence, and perhaps because of humanity’s insatiable desire to push the envelope and create Homo Sapiens 2.0, there is growing interest in building even more capable neural networks that push the boundaries of artificial intelligence further and make machines behave more like human brains in the way they learn and handle data. To do this, scientists have been focusing on building neuromorphic chips, circuits that operate in a similar fashion to human neurons. Now a team at Princeton University has found a way to build a neuromorphic chip that uses light to mimic neurons in the brain, and their study has been detailed in a paper hosted by Cornell University Library.

The details come courtesy of Alexander Tait and his colleagues at Princeton University in New Jersey. The team has built the world’s first integrated silicon photonic neuromorphic chip and, furthermore, has shown that it computes at ultrafast speeds.

Optical computing has long been the great white hope of computer science. Photonic signals offer far greater bandwidth than electronic ones, so an optical system can, in principle, process more data more quickly. But the advantages of optical data processing have never outweighed the cost of building such systems, so their adoption has been notoriously slow. Now that is starting to change, and neural networks are opening up a new opportunity for photonics.

“Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing,” said Tait.

At the heart of the Princeton team’s challenge was the desire to create an optical device in which each photonic node has the same response characteristics as a human neuron. The result is the world’s first integrated silicon photonic neuromorphic chip. The device features 49 circular nodes etched into semiconducting silicon, and each of these “neuron-like” nodes works with a specific wavelength of light that rapidly circulates in the node and, when released, affects the output of a laser. When the laser output returns to the nodes, it completes the circuit.
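To get a feel for the behaviour those nodes reproduce, the sketch below simulates a small continuous-time recurrent network of neuron-like units – the kind of dynamics a network of interconnected, neuron-like photonic nodes emulates. The weights, time constant and input used here are illustrative assumptions, not parameters taken from the Princeton chip.

```python
# Minimal sketch (illustrative assumptions, not the Princeton chip's parameters):
# a continuous-time recurrent network in which each node's state decays, is driven
# by the weighted outputs of the other nodes, and feeds back into the network --
# loosely analogous to each wavelength channel modulating a shared laser output
# that is then broadcast back to the nodes.
import numpy as np

N = 49                      # number of neuron-like nodes, matching the chip's node count
tau = 1.0                   # node time constant (arbitrary units, assumed)
dt = 0.01                   # integration step
steps = 2000

rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # assumed random recurrent weights
u = np.zeros(N)
u[0] = 1.0                  # constant drive into the first node (assumed input)

x = np.zeros(N)             # node states
for _ in range(steps):
    # Standard continuous-time recurrent dynamics: dx/dt = (-x + W*tanh(x) + u) / tau
    x = x + dt * (-x + W @ np.tanh(x) + u) / tau

print("steady-state output of node 0:", np.tanh(x)[0])
```

On the photonic chip, the equivalent of each of these update steps is carried out by light circulating through the nodes rather than by a processor stepping through a loop, which is where the speed advantage comes from.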

 


To test their new chip the team pitted it against a top-of-the-line CPU, and its new optical capabilities didn’t fail to impress – the chip crunched a mathematical differential equation nearly 2,000 times faster than its traditional CPU cousin.

“The effective hardware acceleration factor of the photonic neural network is estimated to be 1,960 × in this task,” said Tait. That’s a speed-up of three orders of magnitude.
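As a rough illustration of what that factor means, the sketch below times a conventional numerical solution of a simple differential equation as a stand-in CPU baseline and confirms that 1,960× is roughly three orders of magnitude. The equation, solver and timings are placeholders, not the benchmark task from the paper.

```python
# Rough illustration only: the equation, solver and timing below are stand-ins,
# not the benchmark task or hardware figures reported by the Princeton team.
import math
import time

def solve_ode_euler(x0: float, dt: float, steps: int) -> float:
    """Conventional CPU baseline: integrate dx/dt = -x with a simple Euler loop."""
    x = x0
    for _ in range(steps):
        x += dt * (-x)
    return x

start = time.perf_counter()
solve_ode_euler(x0=1.0, dt=1e-4, steps=1_000_000)
cpu_seconds = time.perf_counter() - start

acceleration = 1960  # hardware acceleration factor quoted by Tait
print(f"CPU baseline time:          {cpu_seconds:.3f} s")
print(f"At 1,960x acceleration:     {cpu_seconds / acceleration:.6f} s")
print(f"Orders of magnitude gained: {math.log10(acceleration):.2f}")  # ~3
```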

That opens the doors to an entirely new industry that could bring optical computing into the mainstream for the first time.

“Silicon photonic neural networks could represent first forays into a broader class of silicon photonic systems for scalable information processing,” said Tait.

Of course, much depends on how well the first generation of electronic neuromorphic chips performs, and photonic neural nets will have to offer significant advantages over traditional systems in order to be widely adopted, but there are plenty of promising signs. If they deliver, optical computing and the ultrafast processing speeds it is capable of could be the driving force behind tomorrow’s machine learning tools – algorithms that predict trends in the stock market, wearable tech that can detect diseases, and super-smart drones that improve agriculture. These, and many more use cases like them, could be just the tip of the optical iceberg.

