Novel AI Method Sharpens 3D X-ray Vision

NSLS-II scientists see around hidden corners of tiny objects, even when significant portions of data are missing

X-ray tomography is a powerful tool that enables scientists and engineers to peer inside objects in 3D, including computer chips and advanced battery materials, without any invasive procedures. It’s the same basic method behind medical CT scans. Scientists or technicians capture X-ray images as an object is rotated, and advanced software then mathematically reconstructs the object’s 3D internal structure. But imaging fine details on the nanoscale, like features on a microchip, requires a much higher spatial resolution than a typical medical CT scan — about 10,000 times higher.

The Hard X-ray Nanoprobe (HXN) beamline at the National Synchrotron Light Source II (NSLS-II), a U.S. Department of Energy (DOE) Office of Science user facility at DOE’s Brookhaven National Laboratory, is able to achieve that kind of resolution with X-rays that are more than a billion times brighter than traditional CT scans.

Tomography only works well when these projection images can be taken from all angles. In many real-world cases, however, that’s impossible. For example, scientists can’t spin a flat computer chip through 180 degrees without blocking some of the X-rays: at high angles, when the beam runs nearly parallel to the chip’s surface, fewer X-rays can penetrate it, limiting the viewing angles of the measurement. The missing data from this angular range produces a “blind spot,” leading the reconstruction software to produce blurry, distorted images.

“We call this the ‘missing wedge’ problem,” said Hanfei Yan, lead beamline scientist at the HXN beamline and corresponding author of this work. “For decades, this problem has limited the applications of X-ray and electron tomography in many areas of science and technology.”

Read more on the BNL website

Image: This 3D image of an integrated circuit showing slices through its thickness was reconstructed with a new technique that incorporates artificial intelligence called the “perception fused iterative tomography reconstruction engine.”

Credit: Brookhaven National Laboratory

A New Magnetic State for the AI Era: Demonstrating “Alternating Magnetism” in Ruthenium Dioxide Thin Films

—Toward the Development of High-Speed, High-Density Memory for AI and Data Centers—

Background and Challenges
Ruthenium dioxide (RuO₂) has long been regarded as a promising candidate for exhibiting “altermagnetism,” the so-called third type of magnetism. Conventional ferromagnets can easily be written with external magnetic fields, but their own stray fields cause recording errors, posing a fundamental obstacle to high-density memory. Antiferromagnets are resistant to external disturbances such as stray fields; however, because atomic spins (N–S poles) cancel each other out, electrical readout is extremely challenging.

This created the demand for a new class of magnetic material that combines the best of both worlds—robustness against disturbances while still enabling electrical readout and, potentially, future rewriting. Yet, worldwide experimental results on RuO₂’s altermagnetism have been inconsistent, and the lack of high-quality thin films with uniform crystal orientation prevented definitive demonstration.

Key Achievements
A collaborative research team from NIMS, the University of Tokyo, Kyoto Institute of Technology, and Tohoku University succeeded in fabricating single-variant RuO₂ thin films with aligned crystal orientation on sapphire substrates. By optimizing substrate choice and growth conditions, the team clarified the mechanism that determines orientation.

Using X-ray magnetic linear dichroism (XMLD) measurements at the Photon Factory synchrotron facility of KEK, the researchers identified both the magnetic order—where total magnetization (N–S poles) cancels out—and the spin orientation. They further observed spin-split magnetoresistance, a phenomenon in which electrical resistance changes depending on spin orientation, thereby confirming electronic differences in spin states by electrical means.

Read more on the KEK website

Image: Conceptual illustration of altermagnetism in single-variant RuO₂ thin films, showing XMLD signals and spin orientations

Diamond scientists win RSC prize for chemistry-aware AI software

Four scientists from Diamond have been awarded the Materials Chemistry Horizon Prize for their work on accelerating data-driven chemical materials discovery.

The winning AI for Materials team includes Diamond’s Phil Chater, Francesco Carla, Chris Nicklin, and Jonathan Rawle.

The prize honours their exceptional work in developing chemistry-aware artificial intelligence software. The work includes applying this advanced technology to data-driven materials discovery and providing open-source materials databases and language models for the global scientific community.

The team from Diamond were very pleased to contribute to this project that involved a large multinational team. It has been a great collaborative effort to develop the use of artificial intelligence in materials discovery.

Chris Nicklin, Diamond’s Deputy Director of Physical Sciences

Diamond’s four winners were part of a team that includes AI-experts from Cambridge and US supercomputing specialists at Argonne National Laboratory, supported by researchers from around the globe. This included scientists from ISIS Neutron and Muon Source and the Research Complex at Harwell.

The team developed ChemDataExtractor, the first chemistry-aware text mining tool. The materials-domain-specific language software provides an interactive way for scientists to ask questions, similar to the ChatGPT model.

They were able to demonstrate data-driven materials discovery in less than one year, vastly reducing the roughly 20-year timeframe it usually takes industry to discover a new material for a given application.

The resulting high-quality experimental databases and chemistry-specific language models will now help guide scientific decisions and speed up research. To mark their achievements, the team will receive a trophy, and each team member will be presented with a special individual token. Additionally, their remarkable work will be showcased in a special video.

Read more on the Diamond website

Scientists Use AI and X-ray Vision to Gain Insight into Battery Electrolyte

Artificial intelligence and experimental validation reveal atomic-scale basis for improved ‘water-in-salt’ battery performance

UPTON, N.Y. — A team of scientists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and Stony Brook University (SBU) used artificial intelligence (AI) to help them understand how zinc-ion batteries work — and potentially how to make them more efficient for future energy storage needs. Their study, published in the journal PRX Energy, focused on the water-based electrolyte that shuttles electrically charged zinc ions through the rechargeable battery during charging and use. The AI model captured how those charged ions interact with water under varying concentrations of zinc chloride (ZnCl2), a form of salt with high solubility in water.

The AI findings, validated by experiments at Brookhaven Lab’s National Synchrotron Light Source II (NSLS-II), show why high salt concentrations produce the best battery performance.

“AI is an important tool that can facilitate the advancement of science,” said Esther Takeuchi, chair of the Interdisciplinary Science Department (ISD) at Brookhaven Lab and the William and Jane Knapp Chair in Energy and the Environment at SBU. “The research done by this team provides an example of the insights that can be gained by combining experiment and theory enhanced by the use of AI.”

Amy Marschilok, manager of the Energy Storage Division of ISD and a professor of chemistry at SBU, added, “This work could help advance the development of robust zinc-ion batteries for large-scale energy storage. These batteries are particularly attractive for resilient energy applications because the water-based electrolyte is inherently safe and the materials used to make them are abundant and affordable.”

Water in salt

Like all batteries, zinc-ion batteries convert energy from chemical reactions into electrical energy, explained Deyu Lu, a staff scientist in the Theory and Computation Group of Brookhaven Lab’s Center for Functional Nanomaterials (CFN) who led this research.

“However, competing chemical reactions, such as those that split water molecules and produce hydrogen gas, can severely degrade battery performance,” he said. “If any of this energy is used in side reactions, you lose energy that is supposed to do work.”

Lu and his collaborators knew that previous studies had found that water splitting is suppressed in a special zinc chloride electrolyte where the salt concentration is so high it’s referred to as “water-in-salt,” in contrast to more common “salt-in-water” electrolytes. To figure out why the high-salt version was better, they wanted to capture the atomic-scale details of how zinc and chloride ions move and interact with water — and how that affects the electrolyte’s conductivity — at different salt concentrations.

But seeing these atomic-scale details is extremely challenging. So the team turned to a form of computer modeling enhanced by AI vision.

Developing AI vision

“Seeing these complex details would be impossible using conventional computing techniques,” Lu said. “Conventional simulation methods cannot handle the large number of atomic interactions with the desired accuracy to capture the timescales over which such systems evolve. Such calculations require enormous computing power, which would easily take many years.”

So instead of performing all the complex calculations that would be needed to fully simulate the ions’ interactions with water, the team used conventional simulations to generate a small set of simulation data, known as a “training set,” and fed it to an AI program. They used computing resources at the Theory and Computational Facility at CFN, a DOE Office of Science user facility, and Brookhaven Lab’s Scientific Computing and Data Facilities within the Computing and Data Sciences directorate (CDS).

“We needed a little bit of data collected by calculating a small number of interactions to kickstart the process of training an initial model,” said CDS’s Chuntian Cao, first author on the paper. “Then, we ran the model to generate more data to continue to improve the model’s predictions.”

At each step, the scientists ran their results through an ensemble of machine learning (ML) models to assess whether the predictions were accurate. Lu likened the process to calling several friends to help answer questions on “Who Wants to be a Millionaire,” a once-popular TV game show. “If the friends/models all agree, then it looks like you have a good chance that you have an accurate prediction,” he noted.

But, as Cao pointed out, “When we find that some predictions have very large deviations in the ensemble of ML models, we return to doing the conventional calculations to get the correct answer. These new corrected data points are then added back to the training data to further refine the ML model.”

This iterative “active learning” process minimized the number of computationally expensive calculations needed to complete the training of the ML model. And, after several rounds of training, the AI model could make predictions about much larger numbers of atomic interactions over longer and longer timescales.
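
The active-learning loop described in the article can be sketched in a few lines. The toy example below is not the team’s actual code: the expensive simulation is replaced by a cheap stand-in function, and the ensemble is a set of polynomial models fit on bootstrap resamples, but the logic is the same: only the candidate where the ensemble disagrees most gets the expensive calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_calculation(x):
    # Stand-in for a conventional, computationally expensive simulation.
    return np.sin(3 * x)

def fit_ensemble(X, y, n_models=5, degree=4):
    """Fit an ensemble of polynomial models on bootstrap resamples."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        models.append(np.polyfit(X[idx], y[idx], degree))
    return models

def predict(models, X):
    preds = np.array([np.polyval(m, X) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

# Kickstart with a small training set of expensive calculations.
X_train = rng.uniform(0, 2, 8)
y_train = expensive_calculation(X_train)
candidates = np.linspace(0, 2, 200)

for _ in range(10):                      # active-learning rounds
    models = fit_ensemble(X_train, y_train)
    _, std = predict(models, candidates)
    worst = candidates[np.argmax(std)]   # largest ensemble disagreement
    # Only the most uncertain point gets the expensive calculation.
    X_train = np.append(X_train, worst)
    y_train = np.append(y_train, expensive_calculation(worst))

print(f"labeled {len(X_train)} of {len(candidates)} candidate points")
```

After ten rounds, only 18 of the 200 candidate points have been labeled with the expensive calculation, which is the point of the approach: the ensemble’s disagreement decides where the expensive work is actually worth doing.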

“Chuntian ran the simulations with several thousands of atoms, a very large system, for hundreds of nanoseconds — an impossible task using the conventional methods. AI/ML is truly a game changer in the study of complex materials,” Lu said.

Stabilizing water

The Brookhaven and Stony Brook scientists’ AI model revealed that high zinc chloride concentrations play the key role in stabilizing water molecules, protecting them from splitting.

In pure water, the oxygen atom in one water molecule (H2O) forms two so-called hydrogen bonds with hydrogen atoms in neighboring water molecules. These hydrogen bonds connect the water molecules in a continuous network that makes the water molecules more reactive and susceptible to splitting, Lu said.

The team found that the number of hydrogen bonds drops rapidly as the zinc chloride concentration increases, disrupting the hydrogen-bond network. In the water-in-salt regime, only about 20% of the hydrogen bonds are left.

“Stabilizing the water molecules is an essential component of why high-concentration water-in-salt electrolytes work so well,” said Cao.

Read more on NSLS-II website

Image: Scientists used AI to model how zinc and chloride ions (gray and green spheres) at different concentrations would interact with and move through water (oxygen and hydrogen represented by red and white spheres) in an aqueous battery electrolyte. The AI-assisted modeling revealed that a high concentration of zinc chloride salt solution stabilizes water in the electrolyte while maintaining sufficiently high conductivity — characteristics that are essential for aqueous zinc-ion battery performance.

Credit: Chuntian Cao / Brookhaven National Laboratory

Simultaneous experiment unlocks new collaborative research potential utilising a joint AI platform

Diamond Light Source’s I22 beamline and the ISIS Neutron and Muon Source’s Larmor instrument demonstrate the potential afforded using a cutting-edge technique.

In a groundbreaking experiment, a robotic sample preparation platform, driven by artificial intelligence (AI), was used to undertake simultaneous experiments at both Diamond Light Source, the UK’s National Synchrotron, and the ISIS Neutron and Muon Source, the UK’s National Neutron and Muon Source. While collaborative research between these two science facilities, located on the Harwell campus in South Oxfordshire, is common, this particular experiment has pushed new boundaries with the experiments being performed at the same time on identical robotic set-ups which were in direct communication with each other.

The simultaneous experiment was driven by two Automated Formulation Laboratories (AFLs), which, after a two-year delay with customs, finally made it to the Harwell Campus. They were then installed and operated autonomously on Diamond’s I22 and ISIS’s Larmor small-angle scattering beamlines. The AFLs were produced by research teams from The National Institute of Standards and Technology (NIST), based in Gaithersburg, Maryland, USA, shipped to the Rutherford Appleton Laboratory and then installed at Diamond and ISIS.

Dr Gregory Smith, from ISIS, said:

We have discussed using the autonomous formulation laboratory for experiments at ISIS for many years, and it was only recently that we considered the idea of running neutron and X-ray measurements in parallel at ISIS and Diamond. Getting one piece of bespoke equipment on site to run an experiment is challenging enough, but it took great effort from many staff here at RAL, from support staff to scientists to engineers, to manage this. I was pleased to finally manage to get the AFLs here and use them as intended, and the exciting results produced by the NIST team justified all this work, resulting in a truly unique experiment.

The experiment investigated paint formulations, making use of small angle scattering to determine properties of the system. The AFL machines, one red and one blue, were linked together across computer networks and worked concurrently, capturing multiple modalities of the formulations synthesised. 

The project had two objectives: the researchers from NIST were testing the AI and robotic elements of the machines, whilst also working on an industrially relevant question – in this case, “what is the optimal formulation of a given paint system?”

By using both small angle X-ray scattering (SAXS) at the I22 beamline and small angle neutron scattering (SANS) at Larmor, the experiment improved both the utilisation of beamtime and the quality of data collection. SAXS data can be collected more quickly than SANS data, which allowed the experimental team to rule out formulations that weren’t of interest. The team could therefore collect a more complete dataset from the significant formulations in a shorter amount of time, gathering insights from different parts of the system with each technique. Diamond and ISIS offered the unique opportunity for both formulation labs to work in unison at neutron and X-ray facilities, a situation that is currently only possible in two locations in the world.

Tim Snow, principal software scientist working on Diamond’s I22 beamline, said: 

It is a really good example of both facilities working together to exploit our unique capabilities, acquiring the best data for our users.

The robotic element of the AFL prepares liquid mixtures via pipetting and transfers those mixtures to a measurement cell. Following a SANS or SAXS experiment, the data is analysed by an AI software algorithm. The AI algorithm looks at the collected data to work out what mixture to make next and subsequently what scan to conduct next.  
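A minimal sketch of such a closed loop is shown below, with a made-up one-dimensional “formulation space” and a hypothetical measure() function standing in for the robot plus SAXS/SANS measurement; the analysis step here simply bisects the composition interval where the measured signal changes most, which is one simple way an algorithm can decide what mixture to make next.

```python
import numpy as np

rng = np.random.default_rng(2)

def measure(fraction):
    """Stand-in for preparing one mixture and measuring it: returns a
    scalar summary of the scattering curve (hypothetical)."""
    return np.tanh(10 * (fraction - 0.6)) + 0.02 * rng.normal()

# Closed loop: prepare mixture -> measure -> analyze -> choose next mixture.
fractions = [0.0, 1.0]                    # begin with the two pure endpoints
signals = [measure(f) for f in fractions]

for _ in range(8):                        # eight autonomous iterations
    order = np.argsort(fractions)
    f = np.array(fractions)[order]
    s = np.array(signals)[order]
    # Analysis step: the interval where the signal jumps most is where the
    # formulation behaviour is least understood, so sample there next.
    i = int(np.argmax(np.abs(np.diff(s))))
    nxt = float((f[i] + f[i + 1]) / 2)
    fractions.append(nxt)
    signals.append(measure(nxt))

print(sorted(round(x, 3) for x in fractions))
```

Because the toy signal has a sharp transition near a fraction of 0.6, the loop concentrates its measurements around that transition rather than sampling the composition range uniformly.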

Read more on Diamond website

Image: The red Automated Formulation Laboratory (AFL) in place on Diamond’s I22 beamline.

Artificial Imagination

A Brookhaven Lab researcher has conceptualized an “exocortex,” an extension of the human brain that will generate inspiration and imagination for scientific discovery

UPTON, N.Y. — Artificial intelligence (AI) once seemed like a fantastical construct of science fiction, enabling characters to deploy spacecraft to neighboring galaxies with a casual command. Humanoid AIs even served as companions to otherwise lonely characters. Now, in the very real 21st century, AI is becoming part of everyday life, with tools like chatbots available and useful for everyday tasks like answering questions, improving writing, and solving mathematical equations.

AI does, however, have the potential to revolutionize scientific research — in ways that can feel like science fiction but are within reach.

At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are already using AI to automate experiments and discover new materials. They’re even designing an AI scientific companion that communicates in ordinary language and helps conduct experiments. And Kevin Yager, the Electronic Nanomaterials Group leader at the Center for Functional Nanomaterials (CFN), has articulated an overarching vision for the role of AI in scientific research.

It’s called a science exocortex — “exo” meaning outside and “cortex” referencing the information processing layer of the human brain. Rather than simple chatbots and scientific assistants, the conceptualized exocortex will be an extension of a scientist’s brain. Researchers will interact with it through conversation, without the need for any invasive brain-computer interfaces.

“An exocortex, realized through software, would serve as a new source of thinking, inspiration, and imagination,” said Yager, whose vision was recently published in Digital Discovery. “If we design and build the exocortex correctly, our interactions with it will feel like those ‘aha’ moments we sometimes have upon waking from sleep or while otherwise ruminating on a problem. You won’t check in with an exocortex; you’ll experience it.”

Yager describes the exocortex as analogous to the layers of the human brain, which developed through the course of human evolution. Over millions of years, the human brain became the information processing masterpiece it is today by accumulating new layers, each one more sophisticated than the last. The bottom of the brain controls basic survival functions, like breathing. Other, more advanced layers tackle increasingly complicated functions, like emotional regulation and language processing. Most importantly, all facets of the brain work together in harmony to form “the human experience.”

“Technologically, we have the potential now to add another, external layer to the brain — one that connects us to AI,” Yager said. “And just like the specialized regions of the brain that coordinate with each other to give emergence to what we call intelligence, the exocortex will integrate individualized AI capabilities to solve a problem or generate creativity.”

An “app store” of AI agents

Compared to the average chatbot, which is a single AI system, the exocortex would be a collection of dozens of AI agents working together — customized to a researcher’s individual needs.

Each agent would be trained to carry out specific science-related tasks. A scientific literature agent, for example, could sift through published papers to find an optimal protocol for an experiment, while another AI agent collects and analyzes data from a running experiment. Additional agents could launch experiments or simulations, compare findings to previous studies, or even propose ideas for subsequent experiments.

All of the agents’ tasks will happen in concert, simultaneously, and without manual intervention, culminating in new insights delivered to the human researcher.

One design aspect of Yager’s proposed exocortex is that the AI agents will communicate with one another in plain English. This will enable human scientists to study and audit the chains of decisions that lead to a particular AI outcome, providing much-needed opportunities to assess accuracy and exert engineering control.

Yager says the task of building an exocortex is enormous, and the developmental effort should be shared among scientists worldwide, so individual research groups can leverage their own expertise to design new agents. Ideally, scientists will one day have “an app store” from which they can download AI agents that will enhance the abilities of their own exocortex, similar to how downloading new apps adds functionality to phones. Individual AI “apps” could also be efficiently updated and replaced.

“I expect to see a multiplicative effect,” explained Yager. “As scientists simultaneously improve the individual AIs and the foundational exocortex technology, the capabilities of the exocortex will likely grow much faster than people expect.”

Of course, making the exocortex a reality won’t be easy. While scientists have designed a plethora of AIs that can interface with a user and complete specific tasks, building a network of AIs that can interact with each other is an entirely new challenge.  

Yager expects each AI agent to require access to a “catalog” of the other agents and their specialized abilities, so they each can send messages describing the work they’ve done and explaining what they need from other AI agents.

“No one knows how to do this yet,” Yager said. Among the challenges is determining the ideal organization of agents. “Should it be a hierarchy where there is a chief with leaders and employees, like how a company operates? Or should it be more fluid, so the AIs figure out the workflow themselves? There is no obvious answer, and this is an exciting research question about the exocortex design that we are investigating.”

The final output of the exocortex will be a result of some sequence of decisions, planning, execution, verification, and summarization, rather than the simple text that a generative chatbot outputs. This extra iteration, promoted by the communication between AI agents and the exocortex structure, will ultimately improve the output and make the AI even more intelligent.

Read more on BNL website

Unlocking the secrets of proteins

This year’s Nobel Prize in Chemistry goes to three researchers who have made a decisive contribution to cracking the code of proteins – important building blocks of life. However, developing applications from this knowledge, for example in medicine, requires research institutes such as PSI. 

This year’s Nobel Prize in Chemistry came as a surprise in several respects. Firstly, only one of the three scientists chosen, David Baker, is a member of an academic research institution. The other two, Demis Hassabis and John Jumper, work at the Google subsidiary DeepMind. Secondly, the award is based on artificial intelligence (AI). And thirdly, the achievement being recognised draws on an Open Science project that would not have been possible without comprehensive, high-quality, open databases provided by the global scientific community – to which the Paul Scherrer Institute PSI is an important contributor. Given these unusual circumstances, it is easy to overlook the actual reason for awarding the prize. Yet that itself is revolutionary enough: The Nobel Committee is paying tribute to the three scientists for a breakthrough in protein research. Working at the company DeepMind, two of them developed an AI called AlphaFold which is able to predict the spatial structure of a protein with astonishing precision. This structure is a result of the way the molecule is folded, which in turn depends on the sequence of amino acids it contains.

Spatial folding is crucial

It is difficult to assess the full extent of the new possibilities offered by AlphaFold. Proteins and their spatial folding form the central basis of all biological systems – disrupting them can have fatal consequences. The form, function and activity of every single cell are controlled by proteins. This also holds true, of course, for the 30 trillion or more cells that make up the human body, including the cells of the immune system and the brain, as well as pathologically modified cancer cells. Some extracellular structures produced by cells are also made from proteins. These include collagen, which gives skin, bones, tendons and connective tissue their structure and strength. However, until recently scientists were often puzzled as to how the sequence of amino acids, which is relatively easy to determine, gives rise to the three-dimensional configuration.

To determine the spatial structure of proteins, which is crucial for their biological function, researchers had to resort to highly complex X-ray crystallography experiments, which often took years. Only in recent years has it become possible to achieve this by means of a particularly high-resolution form of electron microscopy. X-ray crystallography was first successfully used to determine the structure of a protein in 1959; the protein in question was myoglobin, the muscle protein responsible for oxygen storage and transport in muscle tissue. The scientists led by John Kendrew, who shared the 1962 Nobel Prize in Chemistry with Max Perutz, turned the protein into a crystal and sent monochromatic X-rays through it, similar to the radiation produced by the Swiss Light Source SLS at PSI. The resulting diffraction pattern can be used to determine the folding of the protein chain – and thus provide information about the function of the protein, for example the location of active centres, which interact with small molecules.

At the time that AlphaFold was developed, the structure of some 140,000 proteins had been determined experimentally. These are all listed in the Protein Data Bank (PDB), established in 1971, which is freely accessible to scientists and the general public. “More than five percent of the data it contains comes from the Swiss Light Source SLS at PSI,” says Jörg Standfuss, Head of the Laboratory of Biomolecular Research, which focuses on structural biology at the PSI Centre for Life Sciences. Most of the rest comes from other research centres that operate a high-quality X-ray source.

Read more on PSI website

Image: Proteins are involved in all life processes. They are made up of amino acid chains that form complex structures. This structure is crucial to the function of the proteins. That is why being able to predict the structure of a protein based on its amino acid sequence using AI is so important for understanding life and for innovation in medicine and biology.

Credit: hotspianiegra – stock.adobe.com

AI finds a cheaper way to make green hydrogen

Researchers at the University of Toronto are using artificial intelligence to accelerate scientific breakthroughs in the search for sustainable energy. They used the Canadian Light Source (CLS) at the University of Saskatchewan (USask) to confirm that an AI-generated “recipe” for a new catalyst offered a more efficient way to make hydrogen fuel.   

To create green hydrogen, you pass electricity that’s been generated from renewable resources between two pieces of metal in water. This causes oxygen and hydrogen gases to be released. The problem with this process is that it currently requires a lot of electricity and the metals used are rare and expensive.

“We’re talking about hundreds of millions or billions of alloy candidates, and one of them could be the right answer,” said Jehad Abed. He was part of a team that developed a computer program to significantly speed up this search. Their findings were published in the Journal of the American Chemical Society. At the time of this project, Abed was a PhD student under the supervision of Edward Sargent at the University of Toronto working alongside scientists at Carnegie Mellon University.  

Researchers are searching for the right alloy, or combination of metals, that would act as a catalyst to make this reaction more efficient and affordable. Traditionally, this search would involve trial and error in the lab, but when you are trying to find the proverbial needle in a haystack, this approach takes too much time.

The AI program the team developed took over 36,000 different metal oxide combinations and ran virtual simulations to assess which combination of ingredients might work the best. Abed then tested the program’s top candidate in the lab to see if its predictions were accurate.
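In spirit, that screening stage is a brute-force ranking: score every candidate with a cheap surrogate model and send only the top hit to the lab. The sketch below is purely illustrative, not the team’s program; the element list, descriptor table, and scoring function are all made up, standing in for the trained simulation model described above.

```python
import itertools

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-element descriptors standing in for the real surrogate
# model; an actual screen would use DFT- or ML-predicted energies.
elements = ["Ru", "Ir", "Ni", "Fe", "Co", "Mn", "Cu", "Zn", "Ti", "Mo"]
descriptor = {e: rng.normal(size=3) for e in elements}
target = np.array([0.5, -0.2, 0.1])   # hypothetical ideal binding profile

def score(combo):
    """Lower is better: distance of the mixture's averaged descriptor
    from the hypothetical ideal."""
    mix = np.mean([descriptor[e] for e in combo], axis=0)
    return float(np.linalg.norm(mix - target))

# Enumerate every candidate ternary combination and rank all of them
# in silico; only the top-ranked candidate goes to the lab for testing.
candidates = list(itertools.combinations(elements, 3))
ranked = sorted(candidates, key=score)
best = ranked[0]
print(f"screened {len(candidates)} candidates; best: {best}")
```

With 10 elements taken 3 at a time, this toy screen ranks 120 candidates in milliseconds; the real search space of hundreds of millions of alloys is why the virtual-screening step matters.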

The team used the CLS’s ultra-bright X-rays to analyze the catalyst’s performance during a reaction. “What we needed to do is use that very bright light at the Canadian Light Source to shine it on our material and see how the atomic arrangements would change and respond to the amount of electricity that we put in,” said Abed. The researchers also used the Advanced Photon Source at the Argonne National Laboratory in Chicago.

Read more on CLS website

New artificial intelligence method to create material ​‘fingerprints’

Like people, materials evolve over time. They also behave differently when they are stressed and relaxed. Scientists looking to measure the dynamics of how materials change have developed a new technique that leverages X-ray photon correlation spectroscopy (XPCS), artificial intelligence (AI) and machine learning.

This technique creates ​“fingerprints” of different materials that can be read and analyzed by a neural network to yield new information that scientists previously could not access. A neural network is a computer model that makes decisions in a manner similar to the human brain.

In a new study by researchers in the Advanced Photon Source (APS) and Center for Nanoscale Materials (CNM) at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, scientists have paired XPCS with an unsupervised machine learning algorithm, a form of neural network that requires no labeled training data. The algorithm teaches itself to recognize patterns hidden within arrangements of X-rays scattered by a colloid — a group of particles suspended in solution. The APS and CNM are DOE Office of Science user facilities.

“The goal of the AI is just to treat the scattering patterns as regular images or pictures and digest them to figure out what are the repeating patterns. The AI is a pattern recognition expert.” — James (Jay) Horwath, Argonne National Laboratory

“The way we understand how materials move and change over time is by collecting X-ray scattering data,” said Argonne postdoctoral researcher James (Jay) Horwath, the first author of the study.

These patterns are too complicated for scientists to detect without the aid of AI. ​“As we’re shining the X-ray beam, the patterns are so diverse and so complicated that it becomes difficult even for experts to understand what any of them mean,” Horwath said.

For researchers to better understand what they are studying, they have to condense all the data into fingerprints that carry only the most essential information about the sample. ​“You can think of it like having the material’s genome, it has all the information necessary to reconstruct the entire picture,” Horwath said.

The project is called Artificial Intelligence for Non-Equilibrium Relaxation Dynamics, or AI-NERD. The fingerprints are created using a technique called an autoencoder, a type of neural network with two halves: an encoder that transforms the original image data into the fingerprint, which scientists call a latent representation, and a decoder that maps the latent representation back to the full image.
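
The encoder-decoder idea can be sketched in miniature. A linear autoencoder with a k-dimensional bottleneck is mathematically equivalent to principal component analysis, so the sketch below uses a singular value decomposition in place of a trained network; the actual AI-NERD model is a deep, nonlinear autoencoder, and the random data here merely stands in for real XPCS scattering patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random data standing in for flattened XPCS scattering patterns:
# 200 "images" of 32 x 32 pixels each.
X = rng.normal(size=(200, 32 * 32))
X -= X.mean(axis=0)   # centre the data

# A linear autoencoder with a k-dimensional bottleneck is equivalent to PCA,
# so an SVD gives the "trained" encoder and decoder directly: the encoder
# projects an image onto k directions (the fingerprint), and the decoder
# maps the fingerprint back towards the full image.
k = 8
U, s, Vt = np.linalg.svd(X, full_matrices=False)
encoder = Vt[:k].T    # (1024, k): image -> latent fingerprint
decoder = Vt[:k]      # (k, 1024): fingerprint -> reconstructed image

fingerprints = X @ encoder
reconstruction = fingerprints @ decoder
err = float(np.linalg.norm(X - reconstruction) / np.linalg.norm(X))

print(fingerprints.shape)   # (200, 8)
print(err < 1.0)            # True: even random data has some capturable structure
```

The bottleneck forces the network to keep only the most essential information, which is exactly what makes the latent vector usable as a compact fingerprint.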

The goal of the researchers was to try to create a map of the material’s fingerprints, clustering together fingerprints with similar characteristics into neighborhoods. By looking holistically at the features of the various fingerprint neighborhoods on the map, the researchers were able to better understand how the materials were structured and how they evolved over time as they were stressed and relaxed.
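
The neighborhood-building step can be illustrated with a generic clustering algorithm. The article does not specify which method the team applied, so plain k-means is used here purely as a stand-in, and the synthetic fingerprints below replace real latent vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic latent fingerprints: three hypothetical "neighborhoods" in an
# 8-dimensional latent space, standing in for real autoencoder outputs.
centers = rng.normal(scale=5.0, size=(3, 8))
fingerprints = np.vstack([c + rng.normal(size=(50, 8)) for c in centers])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each fingerprint to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    r = np.random.default_rng(seed)
    cents = X[r.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - cents[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                cents[j] = X[labels == j].mean(axis=0)
    return labels, cents

labels, cents = kmeans(fingerprints, k=3)
print(sorted(np.bincount(labels, minlength=3).tolist()))  # sizes of the recovered neighborhoods
```

Fingerprints landing in the same cluster correspond to samples whose dynamics look alike, which is what lets researchers read structural trends off the map.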

AI, simply put, has good general pattern recognition capabilities, making it able to efficiently categorize the different X-ray images and sort them into the map. ​“The goal of the AI is just to treat the scattering patterns as regular images or pictures and digest them to figure out what are the repeating patterns,” Horwath said. ​“The AI is a pattern recognition expert.”

Using AI to understand scattering data will be especially important as the upgraded APS comes online. The improved facility will generate 500 times brighter X-ray beams than the original APS. ​“The data we get from the upgraded APS will need the power of AI to sort through it,” Horwath said.

Read more on Argonne website

Image: The AI-NERD model learns to produce a unique fingerprint for each sample of XPCS data. Mapping fingerprints from a large experimental dataset enables the identification of trends and repeating patterns, which aids our understanding of how materials evolve.

Credit: Argonne National Laboratory.

The Long Read: The AI revolution

For what was once a purely technical subject, machine learning has hardly been out of the news. Since late 2022, the world has had to come to terms with the impact of a number of groundbreaking generative artificial-intelligence (AI) models – notably the ChatGPT chatbot from the US company OpenAI, and text-to-image systems such as Midjourney, developed by the US company of the same name. Everyday conversations cannot avoid the debate over whether we are living amid a fantastic new industrial revolution – or the end of civilisation as we know it.

All this popular controversy can detract from a quieter – but no less important – machine-learning evolution taking place in the scientific realm. Arguably this began in the 1990s, with greater computing power and the development of so-called neural networks, which attempt to mimic the wiring of the brain, and which helped to popularise AI as an overarching term for machines that ape human thinking. The real acceleration, however, has taken place in the past decade or so, thanks to the storage and processing of “big data”, and experiments with layered neural networks – what has come to be called deep learning.

Of this revolution, synchrotron users – who are among the world’s largest producers of scientific data – stand to be great beneficiaries. Machine learning has the potential to streamline experiments, reduce data volumes, speed up data analysis and obtain results that would otherwise be beyond human insight. “We’ve been amazed in many ways by the results we could produce,” says Linus Pithan, a materials and data scientist based at the German synchrotron DESY, who ran an autonomous crystal-growth experiment at the ESRF’s ID10 beamline with colleagues last year. “The quality of the online data analysis was astonishing.”

Formerly a member of the ESRF’s Beamline Control Unit, where he helped develop the new BLISS beamline control system, Pithan is well placed to test the potential of machine learning in synchrotron science. The flexibility of BLISS was necessary for him and his colleagues to integrate their own deep-learning algorithm, which they had trained beforehand to reconstruct scattering-length density (SLD) profiles from the X-ray reflectivity of molecular thin films. Unlike the forward operation – calculating a reflectivity curve from an SLD profile – this inverse problem can be painfully tedious to solve even for an experienced analyst: the data are inherently ambiguous, because they do not include the phase of the scattered X-rays. Indeed, it is a demanding task for a machine too, which is why at the beamline Pithan’s group made use of an online service known as VISA to harness the ESRF’s central computer system.
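
The amortisation idea, paying the fitting cost before the experiment, can be sketched with a deliberately crude toy: a fake reflectivity model with Kiessig-like fringes, and a nearest-neighbour lookup over precomputed curves standing in for the trained deep network. The real forward model (e.g. Parratt's recursion) and the real network are both far richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)

q = np.linspace(0.02, 0.5, 200)   # momentum-transfer grid (1/Angstrom), toy values

def toy_reflectivity(d):
    """Crude stand-in for the reflectivity of a film of thickness d (Angstrom):
    a q**-4 envelope modulated by Kiessig-like fringes of period 2*pi/d."""
    return (1 + 0.5 * np.cos(q * d)) * q ** -4

# "Training" before the experiment: precompute a library of simulated curves,
# amortising the fitting cost up front, as the ESRF team did by training
# their network beforehand.
thicknesses = np.linspace(80, 640, 561)   # 1 Angstrom steps over the paper's range
library = np.log([toy_reflectivity(d) for d in thicknesses])

def predict_thickness(curve):
    """Inverse step: nearest neighbour in the simulated library (the trained
    deep network replaces this lookup in the published experiment)."""
    dist = np.linalg.norm(library - np.log(curve), axis=1)
    return float(thicknesses[np.argmin(dist)])

# A noisy "measured" curve for a 300 Angstrom film is identified near-instantly:
measured = toy_reflectivity(300.0) * rng.normal(1.0, 0.01, size=q.size)
print(predict_thickness(measured))
```

As in the real experiment, the inversion at measurement time is just a fast lookup, which is what makes millisecond-scale feedback during deposition possible.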

The success of the automation was immediately apparent (Figure 1). From the reflectivity measurements, the deep-learning algorithm could output SLD profiles and thin-film properties such as layer thickness and surface roughness in real time, and thereby stop in-situ molecular beam deposition at any desired sample thickness between 80 Å and 640 Å, with an average accuracy of 2 Å [1]. “The machine-learning model was able to ‘predict’ results within milliseconds,” says Pithan. “In a way, we transferred the time that is traditionally needed for the manual fitting process to the point before the actual experiment where we trained the model. So by the time of the experiment, we were able to get results instantaneously.”

The ESRF has been anticipating a rise in machine learning for many years. It forms part of the data strategy, and is one of the reasons for the ESRF’s engagement in various European projects that support the trend: PaNOSC, which is a cloud service to host publicly funded photon and neutron research data; DAPHNE, which aims to make photon and neutron data accord with “FAIR” principles (findable, accessible, interoperable, reusable); and most recently OSCARS, which promotes European open science. Vincent Favre-Nicolin, the head of the ESRF algorithms and scientific data analysis group, is wary of claiming that machine learning is always a “magical” solution, and points out the toll it can take on computing resources. “But for some areas it makes a real difference,” he says.

Read more on ESRF website

Image: Painstaking manual segmentation of ESRF tomographic data reveals the vasculature of a human kidney for the Human Organ Atlas project. It also provides valuable training data for deep-learning algorithms that will be able to do the same job much faster.

More Brain-like Computers Could Cut IT Energy Costs

The dynamics of magnetic metamaterials offer a path to low-energy, next-gen computing

The public launch of OpenAI’s ChatGPT in November 2022 caused a media sensation and kicked off a rapid proliferation of similar Large Language Models (LLMs). However, the computing power needed to train and run these LLMs and other artificial intelligence (AI) systems is colossal, and the energy requirements are staggering. Training the GPT-3 model behind ChatGPT, for example, required 355 years of single-processor computing time and consumed 284,000 kWh of energy [1]. This is one example of a task that the human brain handles much more efficiently than a traditional computer, and researchers are investigating the potential of more brain-like (neuromorphic) computing methods that may prove to be more energy efficient.

Physical reservoir computing is one such method, using the natural, complex responses of materials to perform challenging computations. Researchers from the University of Sheffield are investigating the use of magnetic metamaterials – structured at the nanoscale to exhibit complex and emergent properties – to perform such computations.

In work recently published in Communications Physics, they have demonstrated an ability to tune the system to achieve state-of-the-art performance in different types of computation. Their results show that an array of interconnected magnetic nanorings is a promising architecture for neuromorphic computing systems.
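
The reservoir-computing idea can be sketched in software with an echo state network, in which a fixed random recurrent network plays the role that the magnetic nanoring array plays in hardware. All sizes, scalings and the delay task below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# A minimal echo state network: the "reservoir" is a fixed random recurrent
# network whose rich dynamics do the heavy lifting, as the physical material
# does in a hardware reservoir computer.
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 keeps dynamics stable

def run_reservoir(u):
    """Drive the reservoir with input sequence u and record its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)  # nonlinear, history-dependent response
        states.append(x.copy())
    return np.array(states)

# Task: reproduce the input delayed by 5 steps, so the reservoir must *remember*.
# Only the cheap linear readout is trained; the reservoir itself is not.
u = rng.uniform(-1, 1, size=600)
target = np.roll(u, 5)
X = run_reservoir(u)[50:]   # discard the initial transient
y = target[50:]
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
nmse = float(np.mean((X @ W_out - y) ** 2) / np.var(y))
print(round(nmse, 3))       # a small value means good recall
```

The appeal for energy efficiency is that only the final linear readout needs training; the expensive nonlinear transformation comes for free from the system's natural dynamics.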

Emergence Could Power More Brain-Like Computers

Anyone who has witnessed the majestic and mesmerising flight of a murmuration of starlings has no doubt wondered how a flock of birds can achieve such synchronised behaviour. This is an example of emergence, where the interactions of simple things lead to complex collective behaviours. But emergence doesn’t only occur in the natural world, and a group at the University of Sheffield is investigating how the emergent behaviour can be engineered in magnetic materials when they are patterned to have nanoscale dimensions.

Dr Tom Hayward, Senior Lecturer in Materials Physics at the University of Sheffield and author of this paper, says:

Life is inherently emergent – with simple entities connecting together to give complex behaviours that a single element would not have. It’s exciting because we can take simple things – which hypothetically can be very energy efficient – and make them manifest the kind of complexity we see in the brain. Material computation relies on the fact that many materials that exhibit some form of memory can take an input and transform it into a different output – precisely the properties we need to perform computation. Our system connects a series of tiny magnetic rings into a big ensemble. One individual ring in isolation shows quite simple behaviours. But when we connect them, they interact with each other to give complex behaviours.

Magnets have a number of properties that make them interesting for these kinds of applications: 

  • Firstly, they are non-volatile, with inherent memory – if you stick a magnet to your fridge, it stays put.
  • Brains (and brain-like computers) need to have non-linear responses, taking simple information and performing complicated transforms, and that’s something magnets are naturally good at.
  • There are plenty of ways to make magnets change state and perform computations that use very little energy.
  • And magnets are a well-established technology (used, for example, in hard drives and Magnetoresistive random-access memory (MRAM)), and so there are existing routes to technology integration.

XPEEM Highlights the Underlying Magnetic Dynamics

Key to this research is understanding what’s happening to these magnetic nanorings when they’re connected together – how emergence alters the way they switch between magnetic states.

Read more on Diamond website

New AI-driven tool streamlines experiments

Researchers at the Department of Energy’s SLAC National Accelerator Laboratory have demonstrated a new approach to peer deeper into the complex behavior of materials. The team harnessed the power of machine learning to interpret coherent excitations, the collective swinging of atomic spins within a system.

This groundbreaking research, published recently in Nature Communications, could make experiments more efficient by providing real-time guidance to researchers during data collection. It is part of a DOE-funded project, led by Howard University and including researchers at SLAC and Northeastern University, that uses machine learning to accelerate research in materials.

The team created this new data-driven tool using “neural implicit representations,” a machine learning development used in computer vision and across different scientific fields such as medical imaging, particle physics and cryo-electron microscopy. This tool can swiftly and accurately derive unknown parameters from experimental data, automating a procedure that, until now, required significant human intervention.

Peculiar behaviors

Collective excitations help scientists understand the rules of systems with many parts, such as magnetic materials. When seen at the smallest scales, certain materials show peculiar behaviors, like tiny changes in the patterns of atomic spins. These properties are key for many new technologies, such as advanced spintronics devices that could change how we transfer and store data.

To study collective excitations, scientists use techniques such as inelastic neutron or X-ray scattering. However, these methods are not only intricate but also resource-intensive; neutron sources, for example, are in limited supply.

Machine learning offers a way to address these challenges, although even then there are limitations. Past experiments used machine learning techniques to enhance the accuracy of X-ray and neutron scattering data interpretation. These efforts relied on traditional image-based data representations. But the team’s new approach, using neural implicit representations, takes a different route. 
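
The coordinate-based ("implicit") idea can be sketched as a model that takes a coordinate plus a physical parameter as input and returns a value, is trained on simulated curves, and is then used to extract the parameter from a noisy measurement. The dispersion formula below is a textbook toy, and the random-feature regressor stands in for the deep network the SLAC team actually trains.

```python
import numpy as np

rng = np.random.default_rng(4)

# Textbook toy dispersion for a 1D Heisenberg ferromagnet: omega(q) = 4J(1 - cos q),
# where J is the exchange constant we will try to recover.
def dispersion(q, J):
    return 4 * J * (1 - np.cos(q))

# A coordinate-based model maps a coordinate (q) plus a parameter (J) to a
# value, instead of storing data on a grid. Here only a linear readout over
# random tanh features is trained; the published tool trains a full deep network.
W = rng.normal(size=(256, 2))
b = rng.uniform(-np.pi, np.pi, size=256)

def features(q, J):
    inp = np.stack([q, np.full_like(q, J)], axis=1)
    return np.tanh(inp @ W.T + b)

# "Training": fit the readout on simulated (q, J) -> omega data.
q_grid = np.linspace(0, np.pi, 50)
J_train = np.linspace(0.5, 2.0, 30)
Phi = np.vstack([features(q_grid, J) for J in J_train])
y = np.concatenate([dispersion(q_grid, J) for J in J_train])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Parameter extraction: scan J and keep the value whose predicted curve best
# matches a noisy "measurement", the step such a tool automates.
measured = dispersion(q_grid, 1.3) + rng.normal(0, 0.02, size=q_grid.size)
J_scan = np.linspace(0.5, 2.0, 151)
errs = [np.mean((features(q_grid, J) @ w - measured) ** 2) for J in J_scan]
J_best = float(J_scan[int(np.argmin(errs))])
print(round(J_best, 2))
```

Because the trained model evaluates in microseconds, the parameter scan is fast enough to run during data collection, which is what enables the real-time guidance described above.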

Read more on SLAC website