More Brain-like Computers Could Cut IT Energy Costs

The dynamics of magnetic metamaterials offer a path to low-energy, next-gen computing

The public launch of OpenAI’s ChatGPT in November 2022 caused a media sensation and kicked off a rapid proliferation of similar large language models (LLMs). However, the computing power needed to train and run these LLMs and other artificial intelligence (AI) systems is colossal, and the energy requirements are staggering: training the GPT-3 model behind ChatGPT, for example, required 355 years of single-processor computing time and consumed 284,000 kWh of energy [1]. This is one example of a task that the human brain handles far more efficiently than a conventional computer, and researchers are investigating whether more brain-like (neuromorphic) computing methods may prove more energy efficient.

Physical reservoir computing is one such method: it uses the natural, complex responses of materials to perform challenging computations. Researchers from the University of Sheffield are investigating the use of magnetic metamaterials – structured at the nanoscale to exhibit complex, emergent properties – for such computations. In work recently published in Communications Physics, they have demonstrated the ability to tune the system to achieve state-of-the-art performance in different types of computation. Their results show that an array of interconnected magnetic nanorings is a promising architecture for neuromorphic computing systems.
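
To make the idea concrete, the sketch below shows reservoir computing in software, with a random recurrent network standing in as the reservoir; in the Sheffield work, the nanoring array itself plays that role. The task, network size and parameters here are illustrative assumptions, not details of the published system.

```python
import numpy as np

# Toy echo state network: a software stand-in for a physical reservoir.
# In physical reservoir computing, the fixed random network below would be
# replaced by a material (e.g. a magnetic nanoring array) whose natural
# dynamics transform the input; only the linear readout is ever trained.

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Illustrative task: predict the next value of an oscillating signal.
t = np.linspace(0, 40, 2000)
u = np.sin(t) * np.cos(0.3 * t)
X = run_reservoir(u[:-1])
y = u[1:]

# Ridge-regression readout -- the only trained component.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

The design point worth noticing is that the reservoir itself is never trained – only the linear readout is. That is precisely why a physical material, whose internal dynamics cannot be adjusted weight by weight, can take the reservoir’s place.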

Emergence Could Power More Brain-Like Computers

Anyone who has witnessed the majestic, mesmerising flight of a murmuration of starlings has no doubt wondered how a flock of birds can achieve such synchronised behaviour. This is an example of emergence, where the interactions of simple things lead to complex collective behaviours. But emergence doesn’t only occur in the natural world, and a group at the University of Sheffield is investigating how emergent behaviour can be engineered in magnetic materials patterned at nanoscale dimensions.

Dr Tom Hayward, Senior Lecturer in Materials Physics at the University of Sheffield and an author of the paper, says:

Life is inherently emergent – with simple entities connecting together to give complex behaviours that a single element would not have. It’s exciting because we can take simple things – which hypothetically can be very energy efficient – and make them manifest the kind of complexity we see in the brain. Material computation relies on the fact that many materials that exhibit some form of memory can take an input and transform it into a different output – precisely the properties we need to perform computation. Our system connects a series of tiny magnetic rings into a big ensemble. One individual ring in isolation shows quite simple behaviours. But when we connect them, they interact with each other to give complex behaviours.
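
The point of the quote can be illustrated with a deliberately crude toy model: identical bistable elements that simply follow an external drive when isolated, but retain a collective memory of it once coupled to their neighbours. The update rule, coupling and drive below are a hypothetical cartoon, not the physics of the nanoring ensemble.

```python
import numpy as np

# Deliberately crude toy model of emergence: bistable elements ("rings")
# updated under a global drive h plus nearest-neighbour coupling J.
# Uncoupled elements forget the drive as soon as it is removed; coupled
# ones lock each other in place and retain a collective memory of it.

rng = np.random.default_rng(1)

def sweep(s, h, J):
    """One asynchronous update: each element feels the drive plus its
    two chain neighbours, through a soft (noisy) threshold."""
    n = len(s)
    for i in rng.permutation(n):
        local = h + J * (s[i - 1] + s[(i + 1) % n])
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-4.0 * local)) else -1
    return s

def remanence(J, n=64):
    """Ramp the drive up, switch it off, and report the mean state."""
    s = rng.choice([-1, 1], size=n)
    for h in np.linspace(0.0, 1.0, 20):   # ramp the drive up
        sweep(s, h, J)
    for _ in range(50):                   # relax at zero drive
        sweep(s, 0.0, J)
    return s.mean()

print("uncoupled (J=0.0):", remanence(0.0))  # ~0: no memory of the drive
print("coupled   (J=0.8):", remanence(0.8))  # ~1: collective memory emerges
```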

Magnets have a number of properties that make them interesting for these kinds of applications: 

  • Firstly, they are non-volatile, with inherent memory – if you stick a magnet to your fridge, it stays put.
  • Brains (and brain-like computers) need non-linear responses – taking in simple information and performing complicated transformations – and that’s something magnets are naturally good at (see the sketch after this list).
  • There are plenty of ways to make magnets change state and perform computations that use very little energy.
  • And magnets are a well-established technology (used, for example, in hard drives and magnetoresistive random-access memory (MRAM)), so there are existing routes to technology integration.
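
The first two bullets – built-in memory and non-linearity – both stem from magnetic hysteresis, which a single bistable “hysteron” is enough to sketch. The thresholds and field values below are arbitrary illustration, not device parameters.

```python
# Illustrative toy: a single hysteretic element captures the two magnet
# properties above -- memory (the state persists at zero field) and a
# non-linear input-output transform. Thresholds are arbitrary choices.

def hysteron(h_sequence, h_up=0.6, h_down=-0.6):
    """Bistable element: switches up above h_up, down below h_down,
    and otherwise remembers its previous state."""
    m, out = -1, []
    for h in h_sequence:
        if h > h_up:
            m = 1
        elif h < h_down:
            m = -1
        out.append(m)          # state persists between the thresholds
    return out

field = [0.0, 0.7, 0.0, -0.2, 0.0, -0.7, 0.0]
print(list(zip(field, hysteron(field))))
# At h = 0 the output depends on history: +1 after the up-swing,
# -1 after the down-swing -- a non-volatile, non-linear response.
```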

XPEEM Highlights the Underlying Magnetic Dynamics

Key to this research is understanding what happens to these magnetic nanorings when they are connected together – the way that emergence alters how they switch between magnetic states. X-ray photoemission electron microscopy (XPEEM) allows these underlying dynamics to be observed directly.

Read more on Diamond website

New AI-driven tool streamlines experiments

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated a new approach to peer deeper into the complex behavior of materials. The team harnessed machine learning to interpret coherent excitations – the collective swinging of atomic spins within a system.

This research, published recently in Nature Communications, could make experiments more efficient by providing real-time guidance to researchers during data collection. It is part of a DOE-funded project, led by Howard University and including researchers at SLAC and Northeastern University, that uses machine learning to accelerate materials research.

The team created this new data-driven tool using “neural implicit representations”, a machine learning technique developed in computer vision and since applied in scientific fields such as medical imaging, particle physics and cryo-electron microscopy. The tool can swiftly and accurately derive unknown parameters from experimental data, automating a procedure that, until now, required significant human intervention.
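
While the published architecture should be taken from the paper itself, the general shape of a neural implicit representation can be sketched: a small network learns a continuous function from coordinates (and physical parameters) to signal intensity, and is then inverted by gradient descent to estimate the parameters behind new data. The toy spin-wave dispersion, network sizes and training setup below are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a neural implicit representation for scattering
# data: instead of a fixed image, a network learns the continuous map
# intensity = f(momentum q, energy E; exchange parameter J). After
# training on simulated spectra, the frozen network is inverted by
# gradient descent to estimate J from "measured" data.

class ImplicitSpectrum(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, E, J):
        # Coordinates and parameter enter as one continuous input vector.
        return self.net(torch.stack([q, E, J], dim=-1)).squeeze(-1)

def toy_spectrum(q, E, J, width=0.05):
    """Stand-in simulator: spin-wave-like dispersion E(q) = 2J(1 - cos q)."""
    return torch.exp(-((E - 2 * J * (1 - torch.cos(q))) ** 2) / width)

model = ImplicitSpectrum()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on simulated spectra spanning a range of coupling constants J.
for _ in range(2000):
    q = torch.rand(256) * torch.pi
    E = torch.rand(256) * 5.0
    J = 0.75 + 0.5 * torch.rand(256)
    loss = ((model(q, E, J) - toy_spectrum(q, E, J)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Inversion: recover the parameter behind a "measured" spectrum by
# optimising J while the trained network stays frozen.
for p in model.parameters():
    p.requires_grad_(False)

q_obs = torch.rand(512) * torch.pi
E_obs = torch.rand(512) * 5.0
I_obs = toy_spectrum(q_obs, E_obs, torch.tensor(1.1))  # ground truth J = 1.1

J_est = torch.tensor(0.9, requires_grad=True)
opt_J = torch.optim.Adam([J_est], lr=1e-2)
for _ in range(500):
    loss = ((model(q_obs, E_obs, J_est.expand(512)) - I_obs) ** 2).mean()
    opt_J.zero_grad(); loss.backward(); opt_J.step()

print("estimated J:", float(J_est))   # should approach 1.1
```

Because the learned function can be queried at any coordinate rather than only on a fixed measurement grid, this style of representation lends itself to the kind of real-time guidance during data collection described above.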

Peculiar behaviors

Collective excitations help scientists understand the rules that govern systems with many interacting parts, such as magnetic materials. Seen at the smallest scales, certain materials show peculiar behaviors, like tiny changes in the patterns of atomic spins. These properties are key to many new technologies, such as advanced spintronics devices that could change how we transfer and store data.

To study collective excitations, scientists use techniques such as inelastic neutron or X-ray scattering. However, these methods are not only intricate but also resource-intensive – neutron sources, for example, are in limited supply.

Machine learning offers a way to address these challenges, though it has limitations of its own. Past experiments have used machine learning to improve the interpretation of X-ray and neutron scattering data, but those efforts relied on traditional image-based data representations. The team’s new approach, built on neural implicit representations, takes a different route.

Read more on SLAC website