Can neuromorphic computers lighten the carbon footprint of AI?

This branch of computer science draws inspiration from the way the brain works to design highly energy-efficient processors.

Our brain is the most efficient computer there is: it consumes around 20 watts, a trifle compared with the millions of watts consumed by supercomputers. Researchers want to draw on this remarkable energy efficiency to design economical computer chips that deliver the benefits of AI without letting the energy and environmental bill explode.

Computing that draws inspiration from our brain is nothing new in itself: neural networks, which have enabled the recent progress of AI, also imitate the functioning of the human brain. But neuromorphic computing aims to go a step further. In the traditional semiconductor chips used today to run AI algorithms, the processor that carries out the calculations and the memory that stores the results are physically separated, which creates inefficiency: transferring data from one to the other consumes time and energy. This phenomenon, known as the von Neumann bottleneck, means that the multiplication of computing power predicted by Moore's empirical law comes with ever greater energy consumption.
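To get a sense of the orders of magnitude at stake, here is a back-of-the-envelope calculation in Python. The energy figures are illustrative assumptions drawn from commonly cited estimates in the chip-design literature, in which fetching data from off-chip memory costs a few hundred times more energy than the arithmetic operation it feeds; they are not measurements of any particular processor.

```python
# Back-of-the-envelope illustration of the von Neumann bottleneck.
# Energy figures are rough, commonly cited orders of magnitude
# and are assumptions for illustration, not measured values.

ENERGY_ADD_PJ = 1.0      # ~1 pJ for a 32-bit arithmetic operation
ENERGY_DRAM_PJ = 640.0   # ~640 pJ to fetch 32 bits from off-chip DRAM

def energy_nanojoules(num_ops: int, dram_fetches_per_op: float) -> float:
    """Total energy (nJ) for num_ops operations, each needing
    dram_fetches_per_op off-chip memory accesses."""
    per_op = ENERGY_ADD_PJ + dram_fetches_per_op * ENERGY_DRAM_PJ
    return num_ops * per_op / 1000.0

# A compute-bound kernel: one DRAM fetch per 100 operations.
compute_bound = energy_nanojoules(1_000_000, 0.01)
# A memory-bound kernel: one DRAM fetch per operation, typical of
# large matrix-vector products in neural networks.
memory_bound = energy_nanojoules(1_000_000, 1.0)

print(f"compute-bound: {compute_bound:,.0f} nJ")
print(f"memory-bound:  {memory_bound:,.0f} nJ")
print(f"ratio: {memory_bound / compute_bound:.0f}x")
```

Under these assumed figures, the memory-bound workload consumes almost two orders of magnitude more energy than the compute-bound one for the same amount of arithmetic, which is precisely the inefficiency that merging computation and memory is meant to eliminate.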

By offering new chips that do away with this separation between computation and memory, neuromorphic computing promises more efficient information processing and drastically reduced energy consumption (up to a thousand times less than a conventional chip).

For this purpose it draws on the properties of spintronics, a branch of physics and electronics that exploits not only the electrical charge of electrons (as conventional electronics does) but also their spin, that is, the rotation of the electron on its own axis.

The discipline is rooted in basic research and has been explored for several years. Today it is approaching maturity, with startups bringing their first products to market, just in time to address the major energy challenges posed by large language models.

Neuromorphic computing begins to be commercialized

In a research paper published in Nature in January, several researchers specializing in neuromorphic computing argue that their discipline is mature enough to move beyond academia and deliver viable products to the market. “We are very close to having large-scale neuromorphic systems capable of supporting large language models,” said Steve Furber, one of the co-authors of the study, in an interview given at the time of its publication. “We just need a demonstration in the form of a ‘killer app’.”

Last year, the American company Intel built the largest neuromorphic system in the world, code-named Hala Point. According to Intel, it is the first large-scale neuromorphic system to surpass the efficiency and performance of CPU- and GPU-based architectures for real-time AI workloads.

For its part, the Israeli company Hailo AI presented its neuromorphic chips dedicated to generative AI and the automotive sector at the latest edition of CES.

On the French side, a dynamic ecosystem is also taking shape, driven by the excellence of several basic research laboratories, including those of the CNRS and CEA-Leti. One of the fathers of spintronics, Albert Fert, winner of the 2007 Nobel Prize in Physics, is also French.

Among French startups, notable names include Golana Computing, a spin-off of the Spintec laboratory in Grenoble, and Spin Ion, a spin-off of the CNRS. Both offer neuromorphic chips tailored for edge computing: the considerable gains in efficiency and energy consumption make it possible to carry out more complex operations on small devices, facilitating the deployment of AI on smartphones and connected objects.

Self-learning systems in the service of greater privacy

But that’s not all. Compared with conventional chips, neuromorphic chips make it easier to deploy AI systems that keep learning in real time, on the model of the human brain. The idea is to train a model by traditional means, then load it onto the neuromorphic chip. The model can then train in its environment and continue to learn new tasks. There is no need to go back through the cloud to fully reprogram the chip with new data: it is capable of continuing to learn on its own.
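As an illustration of this deployment pattern, the sketch below separates the two phases: conventional offline training that produces the initial parameters, then local updates on the device itself, with no round trip to the cloud. The tiny linear model and its update rule are hypothetical stand-ins chosen for brevity, not the programming interface of any real neuromorphic chip.

```python
# Minimal sketch of the pattern described above: a model is trained
# offline, then keeps adapting on the device from local observations.
# The model and update rule are illustrative, not a real chip API.
import random

class OnDeviceModel:
    def __init__(self, weight: float, bias: float):
        # Parameters produced by conventional offline training.
        self.weight = weight
        self.bias = bias

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias

    def local_update(self, x: float, target: float, lr: float = 0.01):
        """One online learning step from a locally observed example.
        The raw data never leaves the device."""
        error = self.predict(x) - target
        self.weight -= lr * error * x
        self.bias -= lr * error

# 1. Offline phase (in the data center): train and ship the weights.
model = OnDeviceModel(weight=1.8, bias=0.3)

# 2. Online phase (on the device): adapt to the local environment.
for _ in range(1000):
    x = random.uniform(-1, 1)
    target = 2.0 * x + 0.5          # the "true" local behaviour
    model.local_update(x, target)

print(f"adapted weight={model.weight:.2f}, bias={model.bias:.2f}")
```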

Traditionally, an autonomous car is updated regularly by transferring its new data to the cloud (for example, an obstacle it had never encountered before). The model is then retrained and pushed back to the car. With a neuromorphic chip, the model could retrain itself, inside the car, without requiring any data transfer. The same goes for a connected medical device, or for industrial robots.

Mercedes-Benz recently launched a research project with a Canadian university along these lines. “Neuromorphic computing can reduce the energy needed to process autonomous driving data by 90% compared with current systems,” the company said. “Safety systems could, for example, recognize traffic signs, lanes and other road users faster and more efficiently.”

Beyond the energy savings achieved by limiting data transfers, this approach has considerable advantages for privacy: fewer data transfers mean fewer potential leaks and fewer attack opportunities for hackers, but also better protection against extraterritorial American laws such as the Cloud Act, which allows US authorities to access data stored on an American cloud even when it is hosted on servers abroad.

Adoption of this technology nevertheless runs into several obstacles. In particular, it involves new programming languages and computer architectures that are not compatible with existing technologies. It therefore requires the emergence of a new ecosystem of developers and software capable of building and distributing neuromorphic applications.
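One concrete reason existing toolchains do not carry over: neuromorphic hardware is generally programmed around spiking neurons, which communicate through discrete events rather than the dense matrix arithmetic of today's deep learning frameworks. Below is a minimal sketch of a leaky integrate-and-fire neuron, the textbook building block of such systems, written in plain Python purely for illustration; actual neuromorphic platforms ship their own dedicated software stacks, and the parameter values here are arbitrary assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the textbook
# building block of spiking neural networks. Parameter values are
# illustrative; real neuromorphic toolchains expose their own APIs.

def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns the list of time steps at which it emitted a spike."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current  # integrate with leak
        if potential >= threshold:              # fire ...
            spikes.append(t)
            potential = 0.0                     # ... and reset
    return spikes

# A weak steady input produces sparse, event-driven output: the
# neuron only "costs" something when it actually spikes.
print(lif_neuron([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```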

Jake Thompson
Growing up in Seattle, I've always been intrigued by the ever-evolving digital landscape and its impacts on our world. With a background in computer science and business from MIT, I've spent the last decade working with tech companies and writing about technological advancements. I'm passionate about uncovering how innovation and digitalization are reshaping industries, and I feel privileged to share these insights through MeshedSociety.com.
