Divergence-free continuous normalizing flows for uncertainty quantification
Abstract
Uncertainty quantification in ill-posed inverse problems is a critical issue in a variety of scientific domains, including, among others, signal processing, imaging science, geoscience, and remote sensing. This has led to a variety of approaches, especially Bayesian schemes such as Kalman methods, particle filtering, and variational Bayesian inference. Dealing with non-linear and non-Gaussian processes remains a challenge, however, especially for high-dimensional systems. Recently, normalizing flows based on deep neural networks have emerged as a powerful tool for training generative models that can sample realistic states while keeping the computation of the likelihood tractable. Exploiting them for uncertainty quantification in terms of differential Shannon entropy, however, requires Monte Carlo methods whose computational cost can be prohibitive, especially for high-dimensional systems such as time, space, and space-time processes. Here, we introduce a new class of continuous normalizing flows with a divergence-free constraint on the underlying governing ordinary differential equations. This divergence-free constraint makes the trained flows preserve differential Shannon entropy. We demonstrate that the proposed framework reaches state-of-the-art performance on generative modeling tasks. We also illustrate applications to uncertainty quantification for the reconstruction of 1D and 2D states from partial observations. We further discuss our main contributions and applications to real-world case studies.
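To make the divergence-free construction concrete, the following is a minimal sketch, not the paper's implementation. In 2D, any vector field obtained by rotating the gradient of a smooth scalar potential ψ, i.e. v = (∂ψ/∂x₂, −∂ψ/∂x₁), has zero divergence by the symmetry of second derivatives, and the instantaneous change-of-variables formula for continuous normalizing flows, d log p/dt = −div v, then gives d log p/dt = 0, so the log-density of each sample, and hence the differential entropy, is constant along the flow. The potential `psi`, its architecture, the function names, and the Euler integrator below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Hypothetical tiny MLP used as the scalar potential psi: R^2 -> R.
# (Architecture and parameters are illustrative, not the paper's model.)
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (0.5 * jax.random.normal(k1, (2, 16)),   # w1
          jnp.zeros(16),                          # b1
          0.5 * jax.random.normal(k2, (16,)))     # w2

def psi(params, x):
    w1, b1, w2 = params
    return jnp.tanh(x @ w1 + b1) @ w2  # scalar potential value

def velocity(params, x):
    """Divergence-free 2D field: v = (d psi/dx2, -d psi/dx1).
    div v = d2 psi/(dx1 dx2) - d2 psi/(dx2 dx1) = 0 by symmetry of
    second derivatives, for any smooth psi."""
    g = jax.grad(psi, argnums=1)(params, x)
    return jnp.array([g[1], -g[0]])

def divergence(params, x):
    # Trace of the Jacobian of v with respect to x.
    return jnp.trace(jax.jacfwd(velocity, argnums=1)(params, x))

def flow(params, x, t1=1.0, n_steps=200):
    """Explicit Euler integration of dx/dt = v(x) from t=0 to t1.
    Since div v = 0, d log p/dt = -div v = 0: the log-density of each
    sample, and hence the differential entropy, is preserved."""
    dt = t1 / n_steps
    for _ in range(n_steps):
        x = x + dt * velocity(params, x)
    return x

x0 = jnp.array([0.3, -1.2])
print(divergence(params, x0))  # ~0 up to floating-point error
print(flow(params, x0))        # transported sample, same log-density
```

The same idea extends beyond 2D by parameterizing the velocity as the row divergence of a learned antisymmetric matrix field, which is likewise divergence-free by construction; the 2D rotated-gradient case above is the simplest instance.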