Conference paper, Year: 2023

Input uncertainty propagation through trained neural networks

Abstract

When physical sensors such as image sensors are involved, the uncertainty over the input data is often a major component of the output uncertainty of machine learning models. In this work, we address the problem of input uncertainty propagation through trained neural networks. We do not rely on a Gaussian assumption for the distribution of the output or of any intermediate layer. Instead, we propagate a Gaussian Mixture Model (GMM), which offers far more flexibility, using the Split&Merge algorithm. This paper's main contribution is the computation of a Wasserstein criterion to control the Gaussian splitting procedure, for which theoretical guarantees of convergence of the output distribution estimates are derived. The methodology is tested against a wide range of datasets and networks. It shows robustness and genericity, and offers highly accurate output probability density function estimation at a reasonable computational cost compared with the standard Monte Carlo (MC) approach.
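For intuition only, here is a minimal hypothetical sketch of the split-then-propagate idea in one dimension, not the paper's implementation: the input Gaussian is replaced by a moment-preserving mixture of narrower components, each component is pushed through the trained network by a first-order (delta-method) approximation that is locally accurate because the components are narrow, and plain Monte Carlo serves as the reference baseline. The classical two-way split, the toy network, and all parameter choices below are illustrative assumptions; the paper's Wasserstein-controlled splitting rule is not reproduced.

```python
# Hypothetical sketch, not the paper's implementation: split-then-propagate
# GMM idea in 1-D, with plain Monte Carlo as the reference baseline.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights):
    """Forward pass through a small MLP; `weights` stands in for a trained network."""
    for W, b in weights[:-1]:
        x = relu(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

# Toy 1-D -> 1-D "trained" network (random weights, purely illustrative).
weights = [
    (rng.normal(size=(1, 16)), rng.normal(size=16)),
    (rng.normal(size=(16, 1)), rng.normal(size=1)),
]

mu, var = 0.5, 0.3 ** 2  # input (sensor) uncertainty: X ~ N(mu, var)

def split(m, v, a=0.8):
    """Classical moment-preserving split of N(m, v) into two equal-weight
    Gaussians: the resulting mixture keeps the same mean and variance."""
    s = np.sqrt(v)
    return [(0.5, m - a * s, (1.0 - a ** 2) * v),
            (0.5, m + a * s, (1.0 - a ** 2) * v)]

def linearize(m, v, eps=1e-4):
    """Delta-method propagation of a narrow N(m, v) component:
    f(X) is approximately N(f(m), f'(m)^2 * v)."""
    y = forward(np.array([[m]]), weights)[0, 0]
    dy = (forward(np.array([[m + eps]]), weights)[0, 0] - y) / eps
    return y, dy ** 2 * v

# Output GMM: one propagated Gaussian per split component.
out = [(w,) + linearize(m, v) for w, m, v in split(mu, var)]
mean = sum(w * y for w, y, _ in out)
std = np.sqrt(sum(w * (vy + y ** 2) for w, y, vy in out) - mean ** 2)

# Reference: brute-force Monte Carlo on the original input Gaussian.
ys = forward(rng.normal(mu, np.sqrt(var), size=(100_000, 1)), weights).ravel()
print(f"GMM : mean {mean:.4f}, std {std:.4f}")
print(f"MC  : mean {ys.mean():.4f}, std {ys.std():.4f}")
```

The splitting step is what makes the per-component linearization tenable: each component covers a region small enough for the network to behave nearly linearly, which is the usual motivation for split-based GMM propagation schemes.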
Main file
2023-ICML-MCPMLP.pdf (9.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04435224, version 1 (02-02-2024)

Identifiers

  • HAL Id: hal-04435224, version 1

Cite

Paul Monchot, Loïc Coquelin, Sébastien J Petit, Sébastien Marmin, Erwan Le Pennec, et al. Input uncertainty propagation through trained neural networks. International Conference on Machine Learning 2023, Aug 2023, Honolulu, United States. ⟨hal-04435224⟩
