Generative domain-adapted adversarial auto-encoder model for enhanced ultrasonic imaging applications
Abstract
In this study, we propose a class-conditioned Generative Adversarial Autoencoder (cGAAE) to improve the realism of simulated ultrasonic imaging techniques, in particular the Multi-modal Total Focusing Method (M-TFM), exploiting the availability of both simulated and experimental TFM images. Specifically, this work studies the inspection of a complex-geometry block representative of a weld-inspection problem, carried out with an ultrasonic multi-element probe. The cGAAE relies on a tailored learning scheme, trained in a semi-supervised fashion on a labelled mixture of synthetic (class 0) and experimental (class 1) M-TFM images obtained under different meaningful inspection set-up parameters (i.e., the celerity of the transverse ultrasonic wave, the specimen back-wall slope and height, and the flaw tilt and height). The cGAAE scheme combines learning stages involving class-conditioned spatial transformers and arbitrary style transfer, which endows the model with powerful generative features, such as quasi-real-time generation of M-TFM images by sweeping the inspection parameters. We exploited the cGAAE model to improve the realism of simulated M-TFM images and to enhance the accuracy of the inverse problem, which aims at estimating the inspection parameters from experimental acquisitions.
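To make the training principle described above more concrete, the sketch below illustrates one possible class-conditioned adversarial autoencoder step: an encoder maps an M-TFM image to a latent code, a decoder reconstructs it conditioned on the domain class (0 = simulated, 1 = experimental) and on the inspection parameters, and a latent discriminator regularises the codes toward a Gaussian prior. This is a hypothetical, minimal PyTorch sketch under assumed network sizes and parameter counts; it is not the authors' implementation and omits the spatial-transformer and style-transfer stages.

```python
# Minimal, hypothetical sketch of a class-conditioned adversarial autoencoder.
# All layer sizes, the 5 inspection parameters, and the 128x128 image size are
# assumptions for illustration, not the paper's actual configuration.
import torch
import torch.nn as nn

LATENT_DIM, N_PARAMS, IMG = 64, 5, 128  # latent size, inspection parameters, image side

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Latent code concatenated with the class label and inspection parameters.
        self.fc = nn.Linear(LATENT_DIM + 1 + N_PARAMS, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),   # 32 -> 64
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid(), # 64 -> 128
        )
    def forward(self, z, cls, params):
        h = self.fc(torch.cat([z, cls, params], dim=1)).view(-1, 64, 16, 16)
        return self.net(h)

class LatentDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, z):
        return self.net(z)

enc, dec, disc = Encoder(), Decoder(), LatentDiscriminator()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.rand(8, 1, IMG, IMG)              # batch of M-TFM images (dummy data)
cls = torch.randint(0, 2, (8, 1)).float()   # 0 = simulated, 1 = experimental
params = torch.rand(8, N_PARAMS)            # normalised inspection parameters

# Autoencoder step: reconstruction loss plus a generator term that tries to
# make the encoded latent codes indistinguishable from the Gaussian prior.
z = enc(x)
recon = dec(z, cls, params)
loss_ae = nn.functional.mse_loss(recon, x) + bce(disc(z), torch.ones(8, 1))
opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

# Discriminator step: prior samples are labelled "real", encoded codes "fake".
z_prior = torch.randn(8, LATENT_DIM)
loss_d = bce(disc(z_prior), torch.ones(8, 1)) + bce(disc(enc(x).detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

Once such a model is trained, new M-TFM images could in principle be generated by sampling a latent code from the prior and sweeping `params` over the inspection parameter grid, which is the quasi-real-time generation feature the abstract refers to.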