Improving Surrogate Model Prediction by Noise Injection into Autoencoder Latent Space
Abstract
Autoencoders (AEs) are a powerful tool for enhancing data-driven surrogate modeling, learning a lower-dimensional representation of high-dimensional data in an encoding-reconstruction fashion. Variational autoencoders (VAEs) improve the interpolation capabilities of AEs by structuring the latent space through the Kullback-Leibler regularization term. However, training a VAE poses practical challenges due to the difficulty of balancing prediction quality against interpolation capability. A compromise between AEs and VAEs is therefore needed to deliver robust predictive models. In this paper, an effective strategy, consisting of injecting noise into the latent space of an AE, is proposed to improve the smoothness of the latent space while preserving reconstruction quality. Experimental results show that the model trained with the proposed noise-injection technique outperforms AEs, VAEs, and other alternatives in terms of prediction quality.
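To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of an autoencoder whose latent code is perturbed with Gaussian noise during training. The layer sizes, the noise scale `sigma`, and the random toy data are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class NoisyLatentAE(nn.Module):
    """Standard AE with Gaussian noise injected into the latent code at train time."""
    def __init__(self, input_dim: int, latent_dim: int, sigma: float = 0.1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )
        self.sigma = sigma  # assumed noise scale; a tunable hyperparameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        if self.training:
            # Noise injection: perturb the latent code only during training, so the
            # decoder must reconstruct well from a neighborhood of each code,
            # which encourages a smoother latent space.
            z = z + self.sigma * torch.randn_like(z)
        return self.decoder(z)

# Illustrative training loop on random data (placeholder for a real dataset).
model = NoisyLatentAE(input_dim=32, latent_dim=4, sigma=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 32)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)  # plain reconstruction loss, no KL term as in a VAE
    loss.backward()
    optimizer.step()
```

Unlike a VAE, this sketch keeps the plain reconstruction objective and adds no Kullback-Leibler term; the latent-space regularization comes solely from the injected noise.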