Conference paper, Year: 2021

Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs

Théo Lacombe
Yuichi Ike
Mathieu Carriere
Frédéric Chazal
Marc Glisse

Abstract

Although neural networks can achieve astonishing performance in a wide variety of contexts, properly training networks on complicated tasks requires expertise and can be computationally expensive. In industrial applications, data coming from an open-world setting may differ widely from the benchmark datasets on which a network was trained. Being able to monitor the presence of such variations without retraining the network is of crucial importance. In this article, we develop a method to monitor trained neural networks based on the topological properties of their activation graphs. To each new observation, we assign a Topological Uncertainty, a score that aims to assess the reliability of the predictions by investigating the whole network instead of only its final layer, as practitioners typically do. Our approach works entirely at the post-training level and requires no assumption on the network architecture or optimization scheme, nor the use of data augmentation or auxiliary datasets; it can therefore be faithfully applied to a large range of network architectures and data types. We experimentally showcase the potential of Topological Uncertainty for trained network selection, Out-of-Distribution detection, and shift detection, on both synthetic and real datasets of images and graphs.
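For intuition, the sketch below shows one way such a score could be computed for a single dense layer: build the bipartite activation graph whose edge between input unit i and output unit j is weighted by |W[j, i] * a[i]|, summarize it by the sorted edge weights of a maximum spanning tree (a simple proxy for its 0-dimensional persistence diagram), and score a new observation by its deviation from the average summary over training points. This is a minimal illustration under stated assumptions, not the authors' reference implementation; the function names, the single-layer setting, and the deviation measure are choices made purely for the example.

# Hedged sketch: per-layer activation-graph summary and a toy Topological
# Uncertainty score. Not the paper's reference code; names and the deviation
# measure are illustrative assumptions.
import numpy as np
import networkx as nx

def activation_graph_persistence(a, W):
    """Sorted maximum-spanning-tree edge weights of the bipartite activation
    graph of one dense layer (used here as a proxy for its 0-dimensional
    persistence diagram)."""
    n_in, n_out = len(a), W.shape[0]
    G = nx.Graph()
    for i in range(n_in):
        for j in range(n_out):
            # Edge weight = contribution of input unit i to output unit j.
            G.add_edge(("in", i), ("out", j), weight=abs(W[j, i] * a[i]))
    mst = nx.maximum_spanning_tree(G)
    return np.sort([d["weight"] for _, _, d in mst.edges(data=True)])

def topological_uncertainty(a, W, baseline):
    """Toy score: mean absolute deviation between the observation's summary
    and a baseline summary (e.g. the average over training points)."""
    diag = activation_graph_persistence(a, W)
    return float(np.mean(np.abs(diag - baseline)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 32))           # one dense layer, 32 -> 16 units
    train = [rng.normal(size=32) for _ in range(50)]
    baseline = np.mean([activation_graph_persistence(x, W) for x in train], axis=0)
    x_in = rng.normal(size=32)              # in-distribution-like input
    x_out = 10.0 * rng.normal(size=32)      # shifted input
    print("TU(in-dist) =", topological_uncertainty(x_in, W, baseline))
    print("TU(shifted) =", topological_uncertainty(x_out, W, baseline))

In this toy setting the shifted input yields a larger score than the in-distribution one, which is the kind of signal the abstract describes for Out-of-Distribution and shift detection; a full treatment would aggregate such summaries across all layers of the trained network.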
Main file
main.pdf (881.34 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03213188 , version 1 (07-05-2021)

Identifiers

Cite

Théo Lacombe, Yuichi Ike, Mathieu Carriere, Frédéric Chazal, Marc Glisse, et al.. Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs. IJCAI 2021 - 30th International Joint Conference on Artificial Intelligence, Aug 2021, Montréal, Canada. pp.2666-2672, ⟨10.24963/ijcai.2021/367⟩. ⟨hal-03213188⟩
184 views
213 downloads
