Computers have long been able to process 2D images quickly; mobile phones can take digital pictures and manipulate them in a variety of ways. Processing images in 3D, however, is much more difficult, and doing it in a timely manner harder still. The mathematics is more complicated, and even supercomputers take time to crunch the numbers.
This is the challenge that a group of scientists at the US Department of Energy's (DOE) Argonne National Laboratory is working to overcome. Artificial intelligence has emerged as a versatile solution to the problems posed by big-data processing. For scientists who use the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, to process 3D images, it may be the key to transforming X-ray data into visible, understandable shapes much more quickly. A breakthrough in this area could have implications for astronomy, electron microscopy, and other areas of science that depend on large amounts of 3D data.
“In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics. Our current methods will not be enough to keep up. Machine learning can make full use of, and go beyond, what is currently possible.”
Mathew Cherukara, Argonne National Laboratory
A team of scientists from three divisions of Argonne National Laboratory has developed a new computational framework called 3D-CDI-NN, and has shown that it can create 3D visualizations from data collected at the APS hundreds of times faster than traditional methods can. The team's research was published in Applied Physics Reviews, a publication of the American Institute of Physics.
CDI stands for coherent diffraction imaging, an X-ray technique that bounces ultra-bright X-ray beams off of a sample. Those beams are collected as data by a detector, and a significant amount of computation is required to turn that data into images. Part of the challenge, explains Mathew Cherukara, leader of the Computational X-ray Science group in Argonne's X-ray Science Division (XSD), is that the detector captures only some of the information from the beams.
The missing data matters, however, and scientists rely on computers to fill it in. As Cherukara points out, this takes time to do in 2D, and even more time with 3D images. The solution: train an artificial intelligence to recognize objects, and the microscopic changes they undergo, directly from the raw data, without having to fill in the missing information at all.
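To see why some information is missing in the first place, it helps to know that a far-field diffraction pattern is approximately the squared magnitude of the Fourier transform of the object, and a detector records only those intensities: the phase of the transform is lost. The toy 2D sketch below illustrates this phase problem; it is an illustration of the general principle, not the team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2D "object": a small bright block on an empty field.
obj = np.zeros((32, 32))
obj[12:20, 12:20] = rng.random((8, 8)) + 0.5

# What the detector sees: intensity = |FFT|^2. The complex phase is discarded.
field = np.fft.fftn(obj)
intensity = np.abs(field) ** 2

# Inverting the intensities alone (i.e., assuming zero phase) does NOT
# recover the object: the phase carried the structural information.
naive = np.fft.ifftn(np.sqrt(intensity)).real
print(np.allclose(naive, obj))  # False
```

Traditional reconstruction algorithms recover the lost phase iteratively, which is what makes converting diffraction data into images so computationally expensive.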
To do this, the team started by feeding simulated X-ray data into a neural network, the NN in the framework's title. A neural network is a set of algorithms that can teach a computer to predict outcomes based on the data it receives. Henry Chan, the paper's lead author and a postdoctoral researcher at the Center for Nanoscale Materials (CNM), a DOE Office of Science user facility at Argonne, led this part of the work.
“We used computer simulations to create crystals of different shapes and sizes, and we converted them into images and diffraction patterns for the neural network to learn,” Chan said. “The ease of quickly generating many realistic crystals for training is the benefit of simulations.”
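The simulation-to-training-pair idea Chan describes can be sketched as follows: generate a random crystal shape, compute the diffraction intensities a detector would record, and pair the two as input and target. The shapes (simple rectangular blocks), sizes, and array dimensions here are illustrative assumptions, not the team's actual simulation setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_training_pair(n=32):
    """Return (diffraction_intensity, object) for one random toy crystal."""
    obj = np.zeros((n, n, n))
    # A random rectangular "crystal" with random size and position.
    size = rng.integers(4, n // 2, size=3)
    pos = [rng.integers(0, n - s) for s in size]
    obj[pos[0]:pos[0] + size[0],
        pos[1]:pos[1] + size[1],
        pos[2]:pos[2] + size[2]] = 1.0
    # The network's input: the intensity pattern a detector would record.
    intensity = np.abs(np.fft.fftn(obj)) ** 2
    return intensity, obj

# A small batch; a real training set would contain many thousands of pairs.
batch = [make_training_pair() for _ in range(8)]
inputs = np.stack([p[0] for p in batch])
targets = np.stack([p[1] for p in batch])
print(inputs.shape, targets.shape)  # (8, 32, 32, 32) (8, 32, 32, 32)
```

Because both the object and its diffraction pattern are known exactly for every simulated crystal, the network can learn the intensity-to-object mapping without ever being told the missing phases.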
This work was done using the graphics processing unit resources of Argonne's Joint Laboratory for System Evaluation, which deploys leading-edge testbeds to enable research on emerging high-performance computing platforms and capabilities.
Once the network is trained, says Stephan Hruszkewycz, physicist and group leader in Argonne's Materials Science Division, it can come very close to the right answer, very quickly. There is still room for improvement, however, so the 3D-CDI-NN framework includes a refinement step that takes the network's estimate the rest of the way. Hruszkewycz worked with Northwestern University graduate student Saugat Kandel on this aspect of the project, which greatly reduces the need for time-consuming iterative procedures.
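One common way to refine a near-correct estimate is classic error-reduction phase retrieval, which alternately enforces the measured intensities in Fourier space and a support constraint in real space. The sketch below stands in for that general technique under toy assumptions (a flat 2D block, a known support); the paper's actual refinement procedure differs in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth toy object and its "measured" Fourier magnitudes.
n = 32
true_obj = np.zeros((n, n))
true_obj[10:22, 10:22] = 1.0
measured_mag = np.abs(np.fft.fftn(true_obj))

# Support: where the object is allowed to be nonzero (here, assumed known).
support = true_obj > 0

def refine(estimate, measured_mag, support, iterations=50):
    """Error-reduction iterations starting from an initial estimate."""
    est = estimate.astype(complex)
    for _ in range(iterations):
        F = np.fft.fftn(est)
        # Keep the current phases; replace magnitudes with the measurement.
        F = measured_mag * np.exp(1j * np.angle(F))
        est = np.fft.ifftn(F)
        # Enforce a real, nonnegative object inside the support.
        est = np.where(support, np.clip(est.real, 0, None), 0.0)
    return est.real

# A noisy initial guess, standing in for the network's quick estimate.
guess = np.where(support, 1.0 + 0.3 * rng.standard_normal((n, n)), 0.0)
refined = refine(guess, measured_mag, support, iterations=50)

err_before = np.linalg.norm(guess - true_obj)
err_after = np.linalg.norm(refined - true_obj)
print(err_after < err_before)
```

The practical point is that a good starting estimate lets the iterative stage converge in far fewer iterations than it would from a random start, which is where the framework's speedup over purely iterative reconstruction comes from.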
“The Materials Science Division cares about coherent diffraction because it lets you see materials at length scales about one-hundredth the width of a human hair, with X-rays that penetrate into environments,” Hruszkewycz said. “This paper is a demonstration of these advanced methods, and it greatly facilitates the imaging process. We want to know what a material is and how it changes over time, and this will help us make better images as we make measurements.”
As a final step, the team tested 3D-CDI-NN's ability to fill in missing information and create 3D visualizations using real X-ray data on tiny gold particles, collected at APS beamline 34-ID-C. The result: the framework was hundreds of times faster on simulated data, and nearly as fast on real APS data. Testing also showed that the network can reconstruct images with less data than is usually required to compensate for the information the detector does not capture.
According to Chan, the next step for this research is to integrate the network into the APS workflow, so that it learns from data as the data are collected. If the network learns from data at the beamline, he said, it will continue to improve.
For this team, the research also has a ticking-clock component. As Cherukara notes, a massive upgrade of the APS is underway, and once the project is complete, the facility will generate exponentially more data than it does today. The upgraded APS will produce X-ray beams that are up to 500 times brighter, and the coherence of the beams, the characteristic of light that allows them to diffract in a way that encodes more information about the sample, will be significantly improved.
This means that while it now takes a couple of minutes to collect coherent diffraction imaging data from a sample and get an image, the data-collection part of that process will soon be up to 500 times faster. To keep up, the process of converting that data into usable images will also need to be hundreds of times faster than it is today.
“In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics,” Cherukara said. “Our current methods will not be enough to keep up. Machine learning can make full use of, and go beyond, what is currently possible.”
In addition to Chan, Cherukara, and Hruszkewycz, the authors of the paper include Subramanian Sankaranarayanan and Ross Harder of Argonne; Youssef Nashed of SLAC National Accelerator Laboratory; and Saugat Kandel of Northwestern University.
Henry Chan et al., Rapid 3D nanoscale coherent imaging via physics-aware deep learning, Applied Physics Reviews (2021). DOI: 10.1063/5.0031486
Argonne National Laboratory
Citation: Deep learning techniques help visualize X-ray data in 3D (2021, July 26), retrieved July 26, 2021 from https://phys.org/news/2021-07-3d-deep-techniques-visualize-x-ray.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.