Quantitative analysis of cell organelles with artificial intelligence

The analysis of cryo X-ray microscopy data is still very time-consuming. Scientists have developed a convolutional neural network that identifies structures with high accuracy within a few minutes.

BESSY II’s high-brilliance X-rays can be used to produce microscopic images with spatial resolution down to a few tens of nanometres. Whole cell volumes can be examined without the complex sample preparation required in electron microscopy. Under the X-ray microscope, the tiny cell organelles with their fine structures and boundary membranes appear clear and detailed, even in three dimensions. This makes cryo X-ray tomography ideal for studying changes in cell structures caused, for example, by external triggers. Until now, however, the evaluation of 3D tomograms has required largely manual and labour-intensive data analysis. To overcome this problem, teams led by computer scientist Prof. Dr. Frank Noé and cell biologist Prof. Dr. Helge Ewers (both from Freie Universität Berlin) have now collaborated with the X-ray microscopy department at HZB. The computer science team has developed a novel, self-learning algorithm: the AI-based method automatically detects subcellular structures and thereby accelerates the quantitative analysis of 3D X-ray data sets. The 3D images of the interior of biological samples were acquired at the U41 beamline at BESSY II.
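To give a feel for the kind of model involved, here is a minimal sketch of a convolutional network for voxel-wise segmentation of tomogram volumes, written in PyTorch. The layer sizes, the class list, and the name OrganelleSegmenter are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a 3D convolutional network for voxel-wise
# segmentation of cryo X-ray tomograms. Layer sizes and the number
# of organelle classes are illustrative assumptions, not the
# authors' published architecture.
import torch
import torch.nn as nn

class OrganelleSegmenter(nn.Module):
    def __init__(self, n_classes: int = 4):  # e.g. background, membrane, vesicle, filopodium
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_classes, kernel_size=1),  # per-voxel class scores
        )

    def forward(self, x):
        return self.net(x)  # shape: (batch, n_classes, depth, height, width)

volume = torch.randn(1, 1, 64, 64, 64)               # one tomogram sub-volume, one channel
labels = OrganelleSegmenter()(volume).argmax(dim=1)  # class label for every voxel
```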

“In this study, we have now shown how well the AI-based analysis of cell volumes works, using mammalian cells from cell cultures that have so-called filopodia,” says Dr Stephan Werner, an expert in X-ray microscopy at HZB. Mammalian cells have a complex structure with many different cell organelles, each of which has to fulfil different cellular functions. Filopodia are protrusions of the cell membrane and serve primarily in cell migration. “For cryo X-ray microscopy, the cell samples are first shock-frozen, so quickly that no ice crystals form inside the cell. This leaves the cells in an almost natural state and allows us to study the structural influence of external factors inside the cell,” Werner explains.

“Our work has already aroused considerable interest among experts,” says first author Michael Dyhr from Freie Universität Berlin. The neural network correctly recognises about 70% of the cell features present within a very short time, enabling very fast evaluation of the data set. “In the future, we could use this new analysis method to investigate how cells react to environmental influences such as nanoparticles, viruses or carcinogens much faster and more reliably than before,” says Dyhr.

Read more in the Proceedings of the National Academy of Sciences journal article

Image: The images show part of a frozen mammalian cell. On the left is a section from the 3D X-ray tomogram (scale: 2 μm). The right figure shows the reconstructed cell volume after applying the new AI-supported algorithm.

Credit: HZB

New software based on Artificial Intelligence helps to interpret complex data

Experimental data is often not only highly dimensional, but also noisy and full of artefacts, which makes it difficult to interpret. Now a team at HZB has designed software that uses self-learning neural networks to compress the data in a smart way and reconstruct a low-noise version in a second step. This makes it possible to recognise correlations that would otherwise not be discernible. The software has now been successfully used in photon diagnostics at the FLASH free-electron laser at DESY, but it is suitable for a wide range of applications in science.

More is not always better; sometimes it is a problem. With highly complex data, which have many dimensions due to their numerous parameters, correlations often become unrecognisable, especially since experimentally obtained data are additionally distorted and noisy due to influences that cannot be controlled.

Helping humans to interpret the data

Now, new software based on artificial intelligence methods can help: it uses a special class of neural networks (NN) that experts call a “disentangled variational autoencoder network (β-VAE)”. Put simply, the first NN compresses the data, while the second NN subsequently reconstructs them. “In the process, the two NNs are trained so that the compressed form can be interpreted by humans,” explains Dr Gregor Hartmann. The physicist and data scientist supervises the Joint Lab on Artificial Intelligence Methods at HZB, which is run by HZB together with the University of Kassel.
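The principle can be illustrated with a minimal β-VAE sketch in PyTorch. The layer shapes, latent size, and the value of β below are illustrative assumptions, not the parameters of the HZB software; the key point is the loss, where a reconstruction term is balanced against a β-weighted term that pushes the compressed code towards independent, human-interpretable components.

```python
# Minimal beta-VAE sketch: an encoder compresses the data into a small
# latent code, a decoder reconstructs them, and the beta-weighted loss
# pushes the code towards independent, interpretable components.
# Sizes and beta are illustrative assumptions, not the HZB software.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, n_in: int = 1024, n_latent: int = 8):
        super().__init__()
        self.enc = nn.Linear(n_in, 2 * n_latent)   # outputs mean and log-variance
        self.dec = nn.Linear(n_latent, n_in)       # rebuilds data from the code

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_rec, mu, logvar, beta=4.0):
    rec = F.mse_loss(x_rec, x, reduction="sum")    # reconstruction quality
    # KL term: pulls each latent dimension towards a unit Gaussian;
    # beta > 1 favours the disentangled, human-readable code.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```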

Read more on the HZB website

Researchers reproduce the learning and forgetting functions of the brain with magnetic systems

Research led by the UAB has managed to emulate neuromorphic learning abilities using thin layers of cobalt oxide. The experiment, performed at the ALBA Synchrotron, is a new step towards brain-inspired computers.


With the advent of Big Data, current computational architectures are proving to be insufficient. Difficulties in decreasing transistors’ size, large power consumption and limited operating speeds make neuromorphic computing a promising alternative.

Neuromorphic computing, a new brain-inspired computation paradigm, reproduces the activity of biological synapses by using artificial neural networks. Such devices work as a system of switches: the ON position corresponds to information retention, or ‘learning’, while the OFF position corresponds to information deletion, or ‘forgetting’.
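As a rough illustration of this switch picture, the toy model below treats the synaptic state as a bounded weight that voltage pulses push towards ON (learning) or OFF (forgetting). The update rule and its numbers are assumptions for exposition, not the published device physics.

```python
# Toy model of an artificial synapse driven by voltage pulses:
# positive pulses potentiate the synapse (learning, towards ON),
# negative pulses depress it (forgetting, towards OFF).
# The rate and bounds are illustrative assumptions.
def apply_pulse(weight: float, voltage: float, rate: float = 0.25) -> float:
    weight += rate * voltage           # ion migration shifts the magnetic state
    return min(max(weight, 0.0), 1.0)  # state bounded between OFF (0) and ON (1)

w = 0.0
for v in (+1, +1, +1, -1):             # three 'learning' pulses, one 'forgetting' pulse
    w = apply_pulse(w, v)
print(w)                               # 0.5: partial retention of what was learned
```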

In a recent publication, scientists from the Universitat Autònoma de Barcelona (UAB), the CNR-SPIN (Italy), the Catalan Institute of Nanoscience and Nanotechnology (ICN2), the Institute of Micro and Nanotechnology (IMN-CNM-CSIC) and the ALBA Synchrotron have explored the emulation of artificial synapses using new advanced material devices. The project was led by Serra Húnter Fellow Enric Menéndez and ICREA researcher Jordi Sort, both at the Department of Physics of the UAB, and is part of Sofia Martins’ PhD thesis.

A new approach to mimic synapse functions

Until now, most systems used for this purpose were ultimately controlled by electric currents, involving significant energy loss through heat dissipation. Here, the researchers proposed using magneto-ionics, the non-volatile control of the magnetic properties of materials by voltage-driven ion migration, which drastically decreases power consumption and makes data storage energy-efficient.

Although heat dissipation decreases with ion migration effects, magneto-ionic motion of oxygen at room temperature is usually too slow for industrial applications, taking several seconds or even minutes to toggle the magnetic state. To solve this problem, the team investigated target materials whose crystal structure already contained the ions to be transported. Such magneto-ionic targets can undergo fully reversible transformations between a non-ferromagnetic (switch OFF) and a ferromagnetic (switch ON) state, simply through voltage-driven oxygen motion from the target towards a reservoir (ON) and back (OFF).

Given their crystalline structures, cobalt oxides were chosen for the fabrication of the films, which ranged from 5 nm to 230 nm in thickness. The researchers investigated the role of thickness in the resulting magneto-ionic behaviour, revealing that the thinner the film, the faster magnetization was generated.

X-ray absorption spectroscopy (XAS) measurements of the samples were performed at the BOREAS beamline of the ALBA Synchrotron. XAS was used to characterize, at room temperature, the elemental composition and oxidation state of the cobalt oxide films, which turned out to differ between the thinnest and thickest films. These findings were crucial to understanding the differences in the magneto-ionic motion of oxygen between the films.

Read more on the ALBA website

Image: BOREAS beamline of the ALBA Synchrotron

I am doing science that is more important than my sleep!

NSLS-II #LightSourceSelfie

Dan Olds is an associate physicist at Brookhaven National Laboratory, where he works as a beamline scientist at NSLS-II. Dan’s research involves combining artificial intelligence and machine learning to perform real-time analysis on streaming data while beamline experiments are underway. Often these new AI-driven methods are critical to success during in situ studies of materials, including next-generation battery components, accident-safe nuclear fuels, catalytic materials and other emerging technologies that will help us develop clean energy solutions to fight climate change.

Dan’s #LightSourceSelfie delves into what attracted him to this area of research, the inspiration he gets from helping users on the beamline and the addictive excitement that comes from doing science at 3am.

Physics on Autopilot

Brookhaven National Lab applies AI to make big experiments autonomous

As a young scientist experimenting with neutrons and X-rays, Kevin Yager often heard this mantra: “Don’t waste beamtime.” Maximizing productive use of the potent and popular facilities that generate concentrated particles and radiation frequently required working all night to complete important experiments. Yager, who now leads the Electronic Nanomaterials Group at Brookhaven National Laboratory’s Center for Functional Nanomaterials (CFN), couldn’t help but think “there must be a better way.”

Yager focused on streamlining and automating as much of an experiment as possible and wrote a lot of software to help. Then he had an epiphany. He realized artificial intelligence and machine-learning methods could be applied not only to mechanize simple and boring tasks humans don’t enjoy but also to reimagine experiments.

“Rather than having human scientists micromanaging experimental details,” he remembers thinking, “we could liberate them to actually focus on scientific insight, if only the machine could intelligently handle all the low-level tasks. In such a world, a scientific experiment becomes less about coming up with a sequence of steps, and more about correctly telling the AI what the scientific goal is.”

Yager and colleagues are developing methods that exploit AI and machine learning to automate as much of an experiment as possible. “This includes physically handling samples with robotics, triggering measurements, analyzing data, and – crucially – automating the experimental decision-making,” he explains. “That is, the instrument should decide by itself what sample to measure next, the measurement parameters to set, and so on.”
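A minimal sketch of such a loop is shown below. The control and analysis functions are hypothetical stand-ins, not Brookhaven's actual instrument API; in a real deployment they would drive motors and detectors, and the selection policy would be model-based.

```python
# Sketch of an autonomous experiment loop. The control and analysis
# functions are hypothetical stand-ins, not Brookhaven's actual
# instrument API; a real deployment would drive motors and detectors.
import random

def move_sample(position): pass                 # robotics: place the sample
def measure(position): return random.random()   # trigger a measurement
def analyze(data): return data                  # on-the-fly data analysis

def select_next(measured, candidates):
    # Simplest possible decision rule: visit an unmeasured candidate.
    # Real systems use model-based policies (e.g. Gaussian processes).
    remaining = [c for c in candidates if c not in measured]
    return remaining[0] if remaining else random.choice(candidates)

def autonomous_experiment(candidates, n_steps):
    measured, position = {}, candidates[0]
    for _ in range(n_steps):
        move_sample(position)
        measured[position] = analyze(measure(position))
        position = select_next(measured, candidates)  # the instrument decides
    return measured

print(autonomous_experiment([0.0, 0.5, 1.0], n_steps=3))
```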

Read more on the Brookhaven website

Image: Example dataset collected during an autonomous X-ray scattering experiment at Brookhaven National Laboratory (BNL). An artificial intelligence/machine learning decision-making algorithm autonomously selected various points throughout the sample to measure. At each position, an X-ray scattering image (small squares) is collected and automatically analyzed. The algorithm considers the full dataset as it selects subsequent experiments.

Credit: Kevin Yager, BNL

AI Agent Helps Identify Material Properties Faster

High-throughput X-ray diffraction measurements generate huge amounts of data. The agent renders them usable more quickly.

Artificial intelligence (AI) can analyse large amounts of data, such as those generated when analysing the properties of potential new materials, faster than humans. However, such systems tend to make definitive decisions even in the face of uncertainty; they overestimate their own reliability. An international research team has stopped AI from doing this: the researchers refined an algorithm so that it works together with humans and supports decision-making processes. As a result, promising new materials can be identified more quickly.
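One common way to realise this behaviour, sketched below with assumed models and a made-up threshold rather than the team's published method, is to let an ensemble of models vote and defer to a human whenever the vote is too split.

```python
# Sketch of an ensemble that defers to a human when unsure: several
# models vote on the phase behind an X-ray diffraction feature, and a
# split vote is not decided by the AI alone. Models and threshold are
# illustrative assumptions, not the team's published method.
import numpy as np

def classify_with_deferral(feature, ensemble, threshold=0.8):
    votes = [model(feature) for model in ensemble]      # each model names a phase
    labels, counts = np.unique(votes, return_counts=True)
    confidence = counts.max() / len(votes)              # fraction that agree
    if confidence < threshold:
        return "defer to human"                         # too uncertain to decide alone
    return labels[counts.argmax()]

# Toy ensemble: three 'models' thresholding the same peak position.
ensemble = [lambda p, t=t: "phase A" if p > t else "phase B" for t in (0.4, 0.5, 0.6)]
print(classify_with_deferral(0.55, ensemble))           # 2 of 3 agree -> defers
```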

A team headed by Dr. Phillip M. Maffettone (currently at National Synchrotron Light Source II in Upton, USA) and Professor Andrew Cooper from the Department of Chemistry and Materials Innovation Factory at the University of Liverpool joined forces with the Bochum-based group headed by Lars Banko and Professor Alfred Ludwig from the Chair of Materials Discovery and Interfaces and Yury Lysogorskiy from the Interdisciplinary Centre for Advanced Materials Simulation. The international team published their report in the journal Nature Computational Science on 19 April 2021.

Read more on the BNL website

Image: Daniel Olds (left) and Phillip M. Maffettone working at the beamline.

Credit: BNL

Game on: Science Edition

After AIs mastered Go and Super Mario, Brookhaven scientists have taught them how to “play” experiments at NSLS-II

Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) trained an AI agent – an autonomous computational program that observes and acts – to conduct research experiments at superhuman levels using the same approach. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.
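The game-playing recipe the team borrowed is reinforcement learning: an agent observes a state, chooses an action, receives a reward, and updates its policy. The toy tabular Q-learning loop below illustrates that recipe only; the states, actions, and rewards are invented stand-ins, not the published beamline agent.

```python
# Toy tabular Q-learning loop illustrating the game-playing recipe:
# observe a state, pick an action, collect a reward, update the policy.
# States, actions, and rewards here are invented stand-ins.
import random
from collections import defaultdict

Q = defaultdict(float)                 # learned value of (state, action) pairs
alpha, gamma, eps = 0.1, 0.9, 0.2      # learning rate, discount, exploration
actions = ["measure_A", "measure_B"]

def toy_reward(action):
    return 1.0 if action == "measure_B" else 0.1  # pretend B is the informative sample

state = 0
for _ in range(200):
    if random.random() < eps:                     # explore occasionally
        action = random.choice(actions)
    else:                                         # otherwise exploit what was learned
        action = max(actions, key=lambda a: Q[(state, a)])
    reward, next_state = toy_reward(action), (state + 1) % 5
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(max(actions, key=lambda a: Q[(0, a)]))      # typically 'measure_B'
```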

As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations – called beamlines – is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day—despite the facility’s 24/7 operations.

“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”

Read more on the Brookhaven website

Image: NSLS-II scientists, Daniel Olds (left) and Phillip Maffettone (right), are ready to let their AI agent level up the rate of discovery at NSLS-II’s PDF beamline.

Credit: Brookhaven National Lab

Smarter experiments for faster materials discovery

Scientists created a new AI algorithm for making measurement decisions; autonomous approach could revolutionize scientific experiments.

A team of scientists from the U.S. Department of Energy’s Brookhaven National Laboratory and Lawrence Berkeley National Laboratory designed, created, and successfully tested a new algorithm to make smarter scientific measurement decisions. The algorithm, a form of artificial intelligence (AI), can make autonomous decisions to define and perform the next step of an experiment. The team described the capabilities and flexibility of their new measurement tool in a paper published on August 14, 2019 in Scientific Reports.
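In the spirit of that approach, autonomous point selection can be sketched with a Gaussian process that is refit after every measurement and queried for where it is most uncertain. The 1D "signal" below is an illustrative stand-in for a real measurement, not the paper's actual experiment.

```python
# Sketch of uncertainty-driven point selection: refit a Gaussian process
# after each measurement and measure next where the model is least sure.
# The 1D 'signal' is an illustrative stand-in for a real measurement.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def signal(x):
    return np.sin(5 * x) + 0.1 * np.random.randn(*x.shape)  # noisy measurement

candidates = np.linspace(0, 1, 200).reshape(-1, 1)           # allowed positions
X = candidates[[0, -1]]                                      # start at the endpoints
y = signal(X).ravel()

gp = GaussianProcessRegressor(alpha=0.01)                    # alpha absorbs noise
for _ in range(10):
    gp.fit(X, y)
    _, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(sigma)].reshape(1, -1)     # most uncertain point
    X = np.vstack([X, x_next])
    y = np.append(y, signal(x_next).ravel())
```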

From Galileo and Newton to the recent discovery of gravitational waves, performing scientific experiments to understand the world around us has been the driving force of our technological advancement for hundreds of years. Improving the way researchers do their experiments can have tremendous impact on how quickly those experiments yield applicable results for new technologies.

Read more on the NSLS-II at Brookhaven Lab website.

Image: (From left to right) Kevin Yager, Masafumi Fukuto, and Ruipeng Li prepared the Complex Materials Scattering (CMS) beamline at NSLS-II for a measurement using the new decision-making algorithm, which was developed by Marcus Noack (not pictured).

Students use AI for sample positioning at BioMAX

The samples at the BioMAX beamline are very sensitive biomolecule crystals; a sample could, for example, be one of the many proteins you have in your body. The crystals last only a short time in the intense X-ray light before being damaged, and they need to be placed exactly right before the researchers switch on the beam. In their master’s project, Isak Lindhé and Jonathan Schurmann used artificial intelligence methods to train the computer to do exactly that.

Hundreds of thousands of proteins
You have hundreds of thousands of different proteins in your body. They do everything from transporting oxygen in your blood to letting your cells take up nutrients after you’ve eaten or making your heart beat. And when things go wrong, you get prescribed medication. The pharmaceutical molecules connect to the proteins in your body to change how they work. To develop new pharmaceuticals with few side effects, researchers therefore need to understand what different proteins look like in detail.

A tedious task
To get high-quality data from a sample, it needs to be correctly positioned in the X-ray beam. The conventional approach is to scan the sample through the beam until the optimal position is found. At MAX IV, the X-ray light is very intense, which is good because smaller crystals can be used. At the same time, though, the sample often can’t be scanned in the beam, since it would be damaged long before the right position is found. The researchers therefore have to perform the rather tedious task of positioning it manually.
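One natural way to frame the students' task for a neural network, sketched here with an assumed architecture and image size rather than their actual model, is as regression: predict the crystal's (x, y) position directly from a camera image, so the sample stage can be aligned without exposing the crystal to the beam.

```python
# Sketch of sample positioning as a regression task: a small network
# predicts the crystal's (x, y) from a camera image so the stage can
# be aligned without scanning the sample through the beam. The
# architecture and image size are illustrative assumptions.
import torch
import torch.nn as nn

class PositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, 2)  # predicted (x, y) of the crystal

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

image = torch.randn(1, 1, 64, 64)    # one microscope snapshot
xy = PositionNet()(image)            # where to move the sample stage
```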

Read more on the MAX IV Laboratory website

Scientists use machine learning to speed discovery of metallic glass

In a new report, they combine artificial intelligence and accelerated experiments to discover potential alternatives to steel in a fraction of the time.

Blend two or three metals together and you get an alloy that usually looks and acts like a metal, with its atoms arranged in rigid geometric patterns.

But once in a while, under just the right conditions, you get something entirely new: a futuristic alloy called metallic glass that’s amorphous, with its atoms arranged every which way, much like the atoms of the glass in a window. Its glassy nature makes it stronger and lighter than today’s best steel, plus it stands up better to corrosion and wear.

Even though metallic glass shows a lot of promise as a protective coating and as an alternative to steel, only a few thousand of the millions of possible combinations of ingredients have been evaluated over the past 50 years, and only a handful have been developed to the point that they may become useful.

Now a group led by scientists at the Department of Energy’s SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST) and Northwestern University has reported a shortcut for discovering and improving metallic glass – and, by extension, other elusive materials – at a fraction of the time and cost.
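The shortcut follows the active-learning pattern sketched below. The data and model choice are illustrative assumptions, not the study's own pipeline; the point is the loop of training on measured alloys, ranking untested compositions, measuring the most promising, and retraining.

```python
# Sketch of the shortcut: train on alloys already measured, rank
# untested compositions by predicted glass-forming likelihood, and
# send the best candidates to the beamline. Data and model choice
# are illustrative assumptions, not the study's own pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
known_X = rng.random((200, 3))       # compositions: fractions of three metals
known_y = rng.integers(0, 2, 200)    # 1 = formed a glass, 0 = crystallised

model = RandomForestClassifier(random_state=0).fit(known_X, known_y)

untested = rng.random((10_000, 3))   # compositions nobody has tried yet
scores = model.predict_proba(untested)[:, 1]     # predicted P(glass)
next_batch = untested[np.argsort(scores)[-10:]]  # top 10 for measurement
# After the high-throughput measurements, the new labels are appended
# to the training set and the model is retrained, closing the loop.
```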

Read more on the SLAC website

Image: Fang Ren, who developed algorithms to analyze data on the fly while a postdoctoral scholar at SLAC, at a Stanford Synchrotron Radiation Lightsource beamline where the system has been put to use.
Credit: Dawn Harmer/SLAC National Accelerator Laboratory