WHY THIS MATTERS IN BRIEF
Transistors capable of self-learning have many benefits, including helping create robots that teach themselves new skills without human help or programming.
Moore’s Law might be slowing down, but the number of companies succeeding in building new and sometimes insane-sounding transistors is growing – whether it’s the development of 5nm, 3nm, 1nm, and 0.5nm transistors, so-called 0nm transistors that literally have no size, atom-sized transistors, or even liquid transistors.
Now, as we look towards a computing future where everything, even nanobots and the human body, can be turned into a computing device, there’s a new type of transistor in the labs – a so-called “Neurotransistor,” or, to put it another way, the synapses of the human brain in silicon form. It could come in especially handy in helping us build so-called generalised robots that learn by themselves without having to be programmed how to do everything – like the ones Google recently showed off that now, essentially, have their own intuition.
The new neurotransistor could also come in very handy in helping create an entirely new breed of hitherto unknown “hybrid” robots, after scientists elsewhere recently managed to connect biological neurons and artificial neurons together to create the world’s first fully interoperable hybrid network.
In the field of Artificial Intelligence (AI), certain activities, like teaching robots to walk, teaching autonomous cars to drive, or performing precise automatic image recognition, demand ever more powerful yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently – our own brain.
For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics.
Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches,” Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.
“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.”
The advantage of this particular architecture is that the simultaneous storage and processing of information can now take place in a single component. In conventional transistor technology they are separated, which slows processing time and hence ultimately also limits performance.
Modelling computers on the human brain is not a new idea, and this new transistor, which is essentially a silicon wafer coated with a polymer, now makes it even easier to create computer chips that can learn for themselves – very much in the same way that humans do.
Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden – and now, finally, they’ve managed to solve that problem.
“In order to [create] the transistor we applied a viscous substance, called solgel, to a conventional silicon wafer with circuits. This polymer hardened and became a porous ceramic material,” the materials science professor explains. “Ions then move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.”
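To make that mechanism a little more concrete, here is a minimal toy simulation in Python – purely an illustrative sketch with made-up parameters, not the team’s published device model. An internal state variable (standing in for the slow-moving ions) responds quickly to excitation but relaxes back slowly, and that lag is exactly what produces a memory, or hysteresis, effect.

# Toy model of the storage effect described above: the internal state (a
# stand-in for the slow ions in the porous ceramic) charges quickly while the
# gate is excited but relaxes back slowly, so the device "remembers" recent
# activity. All parameter names and values are invented for illustration.

TAU_RISE = 2.0     # fast response to excitation (arbitrary time units)
TAU_DECAY = 50.0   # slow ionic relaxation, the source of the hysteresis

def step(state, excited, dt=1.0):
    """Advance the internal state by one time step."""
    if excited:
        return state + (1.0 - state) * dt / TAU_RISE   # fast charging
    return state - state * dt / TAU_DECAY              # slow decay

state = 0.0
history = []
for t in range(100):                      # excite for 20 steps, then rest for 80
    state = step(state, excited=(t < 20))
    history.append(state)

print(f"state right after excitation: {history[19]:.2f}")
print(f"state 80 steps later:         {history[99]:.2f}  (still non-zero: the memory)")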
As Cuniberti explains, this hysteresis is a decisive factor in the functioning of the transistor.
“The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection, and now the system is learning [for itself].”
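In software terms, that behaviour is reminiscent of Hebbian-style learning: the more often an element is excited, the more readily it responds next time. The short Python sketch below, again using invented numbers rather than anything from the paper, shows the basic idea of repeated excitation lowering the “opening” threshold and thereby strengthening the connection.

# Illustrative sketch of "the more it is excited, the sooner it opens":
# each excitation nudges the effective threshold downward, so the same gate
# voltage drives more current the next time (a Hebbian-like strengthening).
# All numbers here are invented for illustration only.

LEARNING_RATE = 0.1      # how much each excitation lowers the threshold
FLOOR = 0.2              # the threshold cannot drop below this value

def excite(threshold, strength=1.0):
    """One excitation event: strengthen the connection by lowering the threshold."""
    return max(FLOOR, threshold - LEARNING_RATE * strength)

def current(threshold, gate_voltage=1.0):
    """Toy transfer curve: current flows once the gate voltage exceeds the threshold."""
    return max(0.0, gate_voltage - threshold)

threshold = 1.0          # initial "opening" threshold
for n in range(1, 6):
    threshold = excite(threshold)
    print(f"after {n} excitations: threshold={threshold:.2f}, current={current(threshold):.2f}")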
“Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains – which is exactly how the human brain works and what makes “us” us. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp by itself without needing to be programmed first; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.”
But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and thus solve problems they were not originally programmed for. So while news of a new transistor might sound like old news, it could turn out to be very big news in the end …