AAEGAN Loss Optimizations Supporting Data Augmentation on Cerebral Organoid Bright-Field Images
Abstract
Cerebral organoids (CO) are brain-like structures that are paving the way to promising alternatives to in vivo models for brain structure analysis. Because the technique is recent, available microscopy image databases of CO cultures contain only a few tens of images and are not widely shared. However, developing and comparing reliable analysis methods, whether semi-automatic or learning-based, requires larger datasets with a trusted ground truth. We extend a small database of bright-field CO images using an Adversarial Autoencoder (AAEGAN), selected after comparing various Generative Adversarial Network (GAN) architectures. We test several loss variations, evaluated with quantitative metrics, to overcome the generation of blurry images and to increase the similarity between original and generated images. To observe how each optimization enriches the variability of the input dataset, we perform dimensionality reduction with t-distributed Stochastic Neighbor Embedding (t-SNE). To highlight the potential benefit of one of these optimizations, we train a U-Net segmentation task with the newly generated images and compare it against classical data augmentation strategies. The perceptual Wasserstein loss proves to be an efficient baseline for future investigations of bright-field CO database augmentation in terms of quality and similarity. Segmentation performs best when the training set includes images from this generative process. According to the t-SNE representation, the generated images are of high quality and enrich the input dataset regardless of the loss optimization. We are convinced that each loss optimization could contribute different information to the generative process that remains to be explored.
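As an illustration only (the abstract does not give the exact formulation used in the paper), a common way to combine a Wasserstein adversarial objective with a perceptual term computed on feature maps $\phi_l$ of a pretrained network is
\[
\mathcal{L} \;=\; \mathbb{E}_{z}\!\left[-\,D\!\left(G(z)\right)\right] \;+\; \lambda \sum_{l} \frac{1}{N_l}\,\bigl\lVert \phi_l(x) - \phi_l(\hat{x}) \bigr\rVert_1 ,
\]
where $\hat{x}$ denotes the generated (or reconstructed) image, $D$ the critic, $N_l$ the number of elements in layer $l$, and $\lambda$ a weighting hyperparameter; these symbols are illustrative assumptions rather than the authors' exact notation.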