Efficient training for artificial intelligence

New physics-based self-learning machines could replace the current artificial neural networks and save energy

Learning with light: This is what the dynamics of a light wave employed inside a physical self-learning machine could look like. Crucial are both its irregular shape and that its development is reversed exactly from the time of its greatest extent (red). © Florian Marquardt, MPL
Artificial intelligence not only delivers impressive performance, but also creates significant demand for energy. The more demanding the tasks for which it is trained, the more energy it consumes. Víctor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which artificial intelligence could be trained much more efficiently. Their approach relies on physical processes instead of the digital artificial neural networks currently used.

OpenAI, the company behind ChatGPT, has not revealed how much energy was required to train GPT-3, the model that makes the chatbot eloquent and apparently well informed. According to the German statistics company Statista, the training would have required around 1,000 megawatt hours - about as much as 200 German households of three or more people consume annually. While this energy expenditure has allowed GPT-3 to learn whether the word 'deep' is more likely to be followed by the word 'sea' or 'learning' in its data sets, by all accounts it has not understood the underlying meaning of such phrases.
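The household comparison above can be checked with a quick back-of-the-envelope calculation. The annual consumption of about 5 megawatt hours per larger German household is an assumption for illustration; the article itself only states the ratio.

```python
# Sanity check of the figures quoted in the article.
# Assumption: a German household of three or more people uses
# roughly 5 MWh of electricity per year (not stated in the text).
training_energy_mwh = 1000      # reported estimate for training GPT-3
household_annual_mwh = 5        # assumed consumption per household per year

households_equivalent = training_energy_mwh / household_annual_mwh
print(households_equivalent)    # -> 200.0, matching the article's comparison
```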

Neural networks on neuromorphic computers

In order to reduce the energy consumption of computers, and particularly of AI applications, several research institutions have in recent years been investigating an entirely new concept of how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it in fact has little to do with them, as artificial neural networks run on conventional digital computers. This means that the software, or more precisely the algorithm, is modelled on the brain's way of working, but digital computers serve as the hardware. They perform the calculation steps of the neural network in sequence, one after the other, differentiating between processor and memory.

"The data transfer between these two components alone devours large quantities of energy when a neural network trains hundreds of billions of parameters, i.e. synapses, with up to one terabyte of data," says Florian Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen. The human brain is entirely different and would probably never have been evolutionarily competitive had it worked with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating.

The brain is characterized by carrying out the numerous steps of a thought process in parallel rather than sequentially. The nerve cells, or more precisely the synapses, are both processor and memory combined. Various systems around the world are being investigated as possible candidates for the neuromorphic counterparts to our nerve cells, including photonic circuits that use light instead of electrons to perform calculations. Their components serve simultaneously as switches and memory cells.

A self-learning physical machine optimizes its synapses independently

For such a machine to train itself, the underlying physical process must be reversible - able to run backwards, as the wave in the illustration does - and non-linear enough to perform complex transformations. Examples of reversible, non-linear processes can be found in optics. Indeed, Víctor López-Pastor and Florian Marquardt are already collaborating with an experimental team developing an optical neuromorphic computer. This machine processes information in the form of superimposed light waves, whereby suitable components regulate the type and strength of the interaction. The researchers' aim is to put the concept of the self-learning physical machine into practice. "We hope to be able to present the first self-learning physical machine in three years," says Florian Marquardt. By then, there should be neural networks which think with many more synapses and are trained with significantly larger amounts of data than today's.

As a consequence there will likely be an even greater desire to implement neural networks outside conventional digital computers and to replace them with efficiently trained neuromorphic computers. "We are therefore confident that self-learning physical machines have a strong chance of being used in the further development of artificial intelligence," says the physicist.

PH/MPG