Machine learning enhances X-ray imaging of nanotextures

Using a combination of high-powered X-rays, phase-retrieval algorithms and machine learning, Cornell researchers revealed the intricate nanotextures in thin-film materials, offering scientists a new, streamlined approach to analyzing potential candidates for quantum computing and microelectronics, among other applications.

Scientists are especially interested in nanotextures that are distributed non-uniformly throughout a thin film because they can give the material novel properties. The most effective way to study the nanotextures is to visualize them directly, a challenge that typically requires complex electron microscopy and does not preserve the sample.

The new imaging technique, detailed July 6 in the Proceedings of the National Academy of Sciences, overcomes these challenges by using phase retrieval and machine learning to invert conventionally collected X-ray diffraction data – such as that produced at the Cornell High Energy Synchrotron Source, where data for the study was collected – into real-space visualizations of the material at the nanoscale.
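To give a flavor of what "phase retrieval" means here, below is a minimal, generic sketch of iterative phase retrieval in the error-reduction style, written in Python with NumPy. It is an illustration of the general idea – recovering a real-space image from diffraction magnitudes given a support constraint – and is not the specific algorithm or machine-learning pipeline published in the study; the function name and constraints are assumptions for the example.

```python
# Generic sketch of iterative phase retrieval (error-reduction style).
# Not the published method: it only illustrates inverting diffraction
# magnitudes into a real-space image, given a known support mask.
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=200, seed=0):
    """Recover a real-space object from Fourier magnitudes.

    measured_magnitude : 2D array, sqrt of measured diffraction intensities
    support            : 2D boolean array, True where the object may be nonzero
    """
    rng = np.random.default_rng(seed)
    # Start from random phases attached to the measured magnitudes.
    phase = np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    obj = np.fft.ifft2(measured_magnitude * phase)

    for _ in range(n_iter):
        # Fourier-space constraint: keep phases, impose measured magnitudes.
        F = np.fft.fft2(obj)
        F = measured_magnitude * np.exp(1j * np.angle(F))
        obj = np.fft.ifft2(F)
        # Real-space constraint: zero outside the support, keep positive part.
        obj = np.where(support, np.maximum(obj.real, 0), 0).astype(complex)

    return obj.real
```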

The use of X-ray diffraction makes the technique more accessible to scientists and allows for imaging a larger portion of the sample, said Andrej Singer, assistant professor of materials science and engineering and David Croll Sesquicentennial Faculty Fellow in Cornell Engineering, who led the research with doctoral student Ziming Shao.

“Imaging a large area is important because it represents the true state of the material,” Singer said. “The nanotexture measured by a local probe could depend on the choice of the probed spot.”

Read more on the CHESS website

Artificial intelligence deciphers detector “clouds” to accelerate materials research

A machine learning algorithm automatically extracts information to speed up – and extend – the study of materials with X-ray pulse pairs.

X-rays can be used like a superfast, atomic-resolution camera, and if researchers shoot a pair of X-ray pulses just moments apart, they get atomic-resolution snapshots of a system at two points in time. Comparing these snapshots shows how a material fluctuates within a tiny fraction of a second, which could help scientists design future generations of super-fast computers, communications, and other technologies.

Resolving the information in these X-ray snapshots, however, is difficult and time intensive, so Joshua Turner, a lead scientist at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, and ten other researchers turned to artificial intelligence to automate the process. Their machine learning-aided method, published October 17 in Structural Dynamics, accelerates this X-ray probing technique and extends it to previously inaccessible materials.

“The most exciting thing to me is that we can now access a different range of measurements, which we couldn’t before,” Turner said.

Handling the blob

When studying materials using this two-pulse technique, the X-rays scatter off a material and are usually detected one photon at a time. A detector measures these scattered photons, which are used to produce a speckle pattern – a blotchy image that represents the precise configuration of the sample at one instant in time. Researchers compare the speckle patterns from each pair of pulses to calculate fluctuations in the sample.
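As a rough illustration of the comparison step, the sketch below computes a normalized cross-correlation between two speckle patterns: a value near 1 means the sample barely changed between the two pulses, while a value near 0 means it rearranged. This is a generic, simplified stand-in for the analysis, not the machine-learning method described in the paper.

```python
# Sketch: quantify how much a sample changed between two X-ray pulses by
# correlating their speckle patterns (1 = identical, ~0 = fully decorrelated).
# A generic illustration of the comparison step, not the published analysis.
import numpy as np

def speckle_correlation(pattern_a, pattern_b):
    """Normalized cross-correlation of two speckle patterns."""
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))
```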

“However, every photon creates an explosion of electrical charge on the detector,” Turner said. “If there are too many photons, these charge clouds merge together to create an unrecognizable blob.” This cloud of noise means the researchers must collect tons of scattering data to yield a clear understanding of the speckle pattern.

“You need a lot of data to work out what’s happening in the system,” said Sathya Chitturi, a Ph.D. student at Stanford University who led this work. He is advised by Turner and coauthor Mike Dunne, director of the Linac Coherent Light Source (LCLS) X-ray laser at SLAC. 

Read more on the SLAC website

Image: A speckle pattern typical of the sort seen at LCLS’s detectors

Credit: Courtesy Joshua Turner

What drives rechargeable battery decay?

How quickly a battery electrode decays depends on properties of individual particles in the battery – at first. Later on, the network of particles matters more.

Rechargeable lithium-ion batteries don’t last forever – after enough cycles of charging and recharging, they’ll eventually go kaput, so researchers are constantly looking for ways to squeeze a little more life out of their battery designs.

Now, researchers at the Department of Energy’s SLAC National Accelerator Laboratory and colleagues from Purdue University, Virginia Tech, and the European Synchrotron Radiation Facility have discovered that the factors behind battery decay actually change over time. Early on, decay seems to be driven by the properties of individual electrode particles, but after several dozen charging cycles, it’s how those particles are put together that matters more.

“The fundamental building blocks are these particles that make up the battery electrode, but when you zoom out, these particles interact with each other,” said SLAC scientist Yijin Liu, a researcher at the lab’s Stanford Synchrotron Radiation Lightsource and a senior author on the new paper. Therefore, “if you want to build a better battery, you need to look at how to put the particles together.”

Read more on the SLAC website

Image: A piece of battery cathode after 10 charging cycles. A machine-learning feature detection and quantification algorithm allowed researchers to automatically single out the most severely damaged particles of interest, which are highlighted in the image.

Credit: Courtesy Yijin Liu/SLAC National Accelerator Laboratory

I am doing science that is more important than my sleep!

NSLS-II #LightSourceSelfie

Dan Olds is an associate physicist at Brookhaven National Laboratory, where he works as a beamline scientist at NSLS-II. Dan’s research involves combining artificial intelligence and machine learning to perform real-time analysis on streaming data while beamline experiments are being performed. Often these new AI-driven methods are critical to success during in situ studies of materials, including next-generation battery components, accident-safe nuclear fuels, catalytic materials and other emerging technologies that will help us develop clean energy solutions to fight climate change.

Dan’s #LightSourceSelfie delves into what attracted him to this area of research, the inspiration he gets from helping users on the beamline and the addictive excitement that comes from doing science at 3am.

Game on: Science Edition

After AIs mastered Go and Super Mario, Brookhaven scientists have taught them how to “play” experiments at NSLS-II

Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) used the same approach to train an AI agent – an autonomous computational program that observes and acts – to conduct research experiments at superhuman levels. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.
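For readers unfamiliar with the game-playing analogy, the sketch below shows the bare reinforcement-learning pattern such an agent follows: it repeatedly chooses which sample to measure next, receives a notional "information gained" reward, and updates its estimates of which measurements pay off. The environment, reward, and function names here are hypothetical placeholders, not the beamline model or agent used in the published work.

```python
# Minimal sketch of a reward-driven measurement loop (epsilon-greedy bandit).
# The measure() callback and its reward are hypothetical stand-ins for a
# beamline environment; this only illustrates the observe-act-learn pattern.
import random

def run_agent(n_samples, n_steps, measure, epsilon=0.1, lr=0.1):
    """measure(sample_index) -> reward, e.g. information gained by measuring."""
    value = [0.0] * n_samples            # estimated payoff of measuring each sample
    for _ in range(n_steps):
        if random.random() < epsilon:    # explore: occasionally try any sample
            action = random.randrange(n_samples)
        else:                            # exploit: measure the most promising sample
            action = max(range(n_samples), key=lambda i: value[i])
        reward = measure(action)
        value[action] += lr * (reward - value[action])   # incremental update
    return value
```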

As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2,000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations – called beamlines – is hard to get: nearly three times more researchers would like to use them than the beamlines can accommodate in a day, despite the facility’s 24/7 operations.

“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”

Read more on the Brookhaven website

Image: NSLS-II scientists Daniel Olds (left) and Phillip Maffettone (right) are ready to let their AI agent level up the rate of discovery at NSLS-II’s PDF beamline.

Credit: Brookhaven National Lab