Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation, and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
In the future machine vision will capture even more videos and images than it does today, and synthetic content will create even more, and we need computer chips that can process this insane volume of graphical data.
Love the Exponential Future? Join our XPotential Community, subscribe to the podcast, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.
How many images can your brain – the brain in your head that many say is “the most complex thing in the universe” – process? A couple of hundred per second, a few thousand? Well now your puny mind has been bested good and proper after researchers at the University of Pennsylvania announced they’ve developed a powerful new optical chip for futuristic photonic computing systems that can process almost 2 billion images per second. The device is made up of a neural network that processes information as light without needing components that slow down traditional computer chips, like memory.
The basis of the new chip is a neural network, a system modelled on the way the brain processes information. These networks are made up of nodes that interconnect like neurons, and they even “learn” in a similar way to organic brains by being trained on sets of data, such as recognizing objects in images or words in speech. Over time, they become much better at these tasks.
But rather than electrical signals, the new chip processes information in the form of light. It uses optical wires as its neurons, stacked in multiple layers that each specialize in a particular type of classification.
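The layered classifier described above can be sketched in ordinary code. The toy model below is purely illustrative, not the team's design: each "layer" is a linear mix of its inputs (standing in for light interfering as it passes through the optical wires) followed by a nonlinearity, and the layer sizes and weights are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "layer": a linear mix of inputs followed by a nonlinearity.
# ReLU stands in here for whatever nonlinearity the optical hardware uses.
def layer(x, weights):
    return np.maximum(weights @ x, 0.0)

# Three stacked layers, each "specializing" via its own weight matrix.
# All sizes are hypothetical, chosen only for illustration.
w1 = rng.normal(size=(16, 64))
w2 = rng.normal(size=(8, 16))
w3 = rng.normal(size=(4, 8))      # 4 outputs -> 4 character classes

image = rng.normal(size=64)       # a flattened 8x8 "handwritten character"
scores = w3 @ layer(layer(image, w1), w2)
predicted_class = int(np.argmax(scores))
```

In a trained network the weight matrices would be learned from labelled examples; here they are random, so the prediction is meaningless — the point is only the stacked-layer structure.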
In tests, the team made a chip measuring 9.3 mm² (0.01 in²) and put it to work categorizing a series of handwritten characters that resembled letters. After being trained on relevant data sets, the chip was able to classify the images with 93.8 percent accuracy for sets containing two types of characters, and 89.8 percent accuracy for four types.
Most impressively, the chip was able to categorize each character within 0.57 nanoseconds, which would allow it to process 1.75 billion images per second. The team says that this speed comes from the chip’s ability to process information as light, which gives it several advantages over existing computer chips.
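The throughput figure follows directly from the per-image latency, as a quick back-of-envelope check shows:

```python
# 0.57 nanoseconds per classification, as quoted above.
time_per_image_s = 0.57e-9
images_per_second = 1 / time_per_image_s
print(f"{images_per_second:.2e} images/s")  # ~1.75e9, i.e. ~1.75 billion per second
```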
“Our chip processes information through what we call ‘computation-by-propagation,’ meaning that unlike clock-based systems, computations occur as light propagates through the chip,” said Firooz Aflatouni, lead author of the study.
“We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.”
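One way to picture "computation-by-propagation" in code: the input is a vector of complex optical field amplitudes, each layer is a fixed transmission matrix, and the answer simply falls out of the light passing through — no clock, no stored intermediate state, and no optical-to-electrical conversion until the photodetectors at the end. This is a conceptual sketch with hypothetical sizes, not the chip's actual transfer function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each layer modelled as a fixed complex transmission matrix
# (hypothetical sizes; real devices encode these in the optics).
t1 = rng.normal(size=(8, 16)) + 1j * rng.normal(size=(8, 16))
t2 = rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))

# Input: complex field amplitudes encoding the image.
field_in = rng.normal(size=16) + 1j * rng.normal(size=16)

# "Computation" is just propagation: two matrix products, no clocked steps.
field_out = t2 @ (t1 @ field_in)

# Photodetectors read out intensity (|amplitude|^2), not the field itself.
intensities = np.abs(field_out) ** 2
predicted_class = int(np.argmax(intensities))
```

The design point this illustrates is that every intermediate "value" exists only transiently as light in flight, which is also why nothing needs to be written to memory along the way.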
Another advantage is that the information being processed doesn’t need to be stored, so the chip saves time by never sending data to memory, and saves space by not needing a memory component at all. The team also says that not storing the data is more secure, since it prevents any possible leaks.
The next steps for the team are to begin scaling up the chip, and adapting the technology to process other types of data.
“What’s really interesting about this technology is that it can do so much more than classify images,” said Aflatouni. “We already know how to convert many data types into the electrical domain – images, audio, speech, and many other data types. Now, we can convert different data types into the optical domain and have them processed almost instantaneously using this technology.”
All of which means that soon, very soon, the most complex thing in the universe might be a bunch of silicon wafers and an artificial intelligence …
The research was published in the journal Nature.
Source: University of Pennsylvania