Towards AI-automated TEM
Abstract
Transmission electron microscopes, like other scientific instruments, are becoming increasingly complex. Take, for example, the I2TEM in Toulouse, a TEM dedicated to electron holography and in-situ studies (a Hitachi HF-3300 C), which has a cold field-emission gun, 9 lenses, 4 apertures, 4 biprisms, 18 pivot points to align, and almost as many elements in the corrector. Operation involves over one hundred configurable parameters; with approximately 10^300 possible configurations, one wonders whether the instrument is ever used to its full capability.
To address this complexity, we first developed full computer control of the microscope. Hitachi supplied access to every element (including aperture positions, deflector currents and alignments) together with details of the communication protocol. This allowed us to develop dynamic automation of the microscope, stabilizing the specimen and hologram alignment in real time through traditional control and feedback loops. To go further, we asked whether the computer could take complete control of the microscope according to the user's needs using artificial intelligence (AI).
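As a minimal sketch of such a feedback loop, the following estimates specimen drift by cross-correlating successive images and applies a proportional correction. The `microscope` object and its `acquire_image` and `shift_beam` methods are hypothetical placeholders for the control API, not the authors' actual interface:

```python
import numpy as np
from scipy.signal import fftconvolve

def measure_drift(reference: np.ndarray, current: np.ndarray) -> tuple[float, float]:
    """Estimate (dy, dx) drift by locating the cross-correlation peak."""
    a = current - current.mean()
    b = reference - reference.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return float(peak[0] - center[0]), float(peak[1] - center[1])

def stabilization_loop(microscope, gain: float = 0.5, n_steps: int = 100):
    """Keep the image registered to a reference frame in real time."""
    reference = microscope.acquire_image()
    for _ in range(n_steps):
        current = microscope.acquire_image()
        dy, dx = measure_drift(reference, current)
        # Proportional feedback: apply a fraction of the measured drift
        # as a corrective shift each iteration to avoid overshoot.
        microscope.shift_beam(-gain * dx, -gain * dy)
```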
Machine-learning methods, such as convolutional neural networks (CNNs), are gradually replacing older forms of automation in other sectors. We created an API to change the microscope parameters automatically while acquiring images. This enabled us to build training datasets matching configurations to the images the microscope produces. Because most configurations would not form an image on the screen, we first aligned the TEM and then randomly shifted the parameters around their aligned values. This allowed us to predict image characteristics from the microscope configuration, as well as configurations that satisfy specific image characteristics. In parallel, we have developed a realistic simulation of the I2TEM to produce a dataset of virtual experiments.
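A sketch of this dataset-generation scheme, under the assumption of a control API exposing `get_config`, `set_config`, and `acquire_image` (illustrative names, as above); the relative spread of the random shifts is likewise an assumption:

```python
import numpy as np

def build_dataset(microscope, n_samples: int = 10_000, rel_spread: float = 0.05):
    """Record (configuration, image) pairs around an aligned state."""
    aligned = microscope.get_config()   # dict: parameter name -> value
    rng = np.random.default_rng(seed=0)
    dataset = []
    for _ in range(n_samples):
        # Random shifts around the aligned values keep most configurations
        # close enough to the working point to still form an image.
        config = {name: value * (1 + rel_spread * rng.standard_normal())
                  for name, value in aligned.items()}
        microscope.set_config(config)
        dataset.append((config, microscope.acquire_image()))
    return dataset
```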
Users are primarily interested in what one can call meta-parameters, such as beam size, beam position, focus and magnification, rather than in the microscope configuration itself. Control of the microscope should therefore be expressed in these terms. We use a variational auto-encoder for this, which encodes an image into only a few parameters, with constraints to ensure they remain intelligible to humans. The user can then manipulate the encoded image, and we train a fully connected model to predict a configuration whose output image has a similar encoding.
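The following PyTorch sketch illustrates the encode/invert idea. The latent dimension, layer sizes, and the assumption of small grayscale images are all illustrative choices, not the authors' architecture:

```python
import torch
import torch.nn as nn

LATENT = 4   # e.g. beam size, beam position (x, y), focus

class VAE(nn.Module):
    """Toy variational auto-encoder mapping an image to a few meta-parameters."""
    def __init__(self, img_pixels: int = 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_pixels, 256), nn.ReLU(),
            nn.Linear(256, 2 * LATENT),     # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar

# Fully connected model mapping a desired latent encoding back to a
# configuration vector (~100 control values, per the abstract).
inverse_model = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 100),
)
```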
The aim is to integrate the whole solution into an application relying on reinforcement learning: the microscopist first sets the desired image parameters, then the model iteratively searches for the configuration that best realizes those meta-parameters in the encoded output of the microscope. We will present results for test cases of practical use.
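A hedged sketch of this closed loop follows, reusing the placeholder `vae`, `inverse_model`, and microscope API from the sketches above; the hypothetical `set_config_vector` applies a configuration vector. The abstract describes a reinforcement-learning approach, which this simple iterative feedback only approximates:

```python
import torch

def optimize_to_target(microscope, vae, inverse_model, target_z: torch.Tensor,
                       n_iters: int = 50, tol: float = 1e-2):
    """Iterate until the encoded microscope output matches the target encoding."""
    z = target_z.clone()
    config = inverse_model(z)
    for _ in range(n_iters):
        config = inverse_model(z)                     # propose a configuration
        microscope.set_config_vector(config.detach().numpy())
        image = torch.as_tensor(microscope.acquire_image(), dtype=torch.float32)
        _, mu, _ = vae(image.flatten().unsqueeze(0))  # encode the actual output
        error = target_z - mu.squeeze(0)
        if error.norm() < tol:
            break
        z = z + 0.5 * error   # nudge the request toward the unmet target
    return config
```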