Conference paper, Year: 2021

A Classification of Anomaly Explanation Methods

Abstract

The adoption of algorithms in real-world settings is strongly desired, but end users first need to be reassured that they can trust the algorithms' outputs. Building this trust requires algorithms not only to produce accurate results, but also to explain how those results were obtained. From this need, a new field has emerged: eXplainable Artificial Intelligence (XAI). Deep learning has greatly benefited from this field, especially for classification tasks, as the considerable number of works and surveys devoted to deep explanation methods attests. Other machine learning tasks, such as anomaly detection, have received less attention when it comes to explaining an algorithm's outputs. In this paper, we focus on anomaly explanation. Our contribution is a categorization of anomaly explanation methods and an analysis of the different forms anomaly explanations may take.
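To make the task concrete, here is a minimal, hypothetical sketch (not taken from the paper) of what an anomaly explanation can look like in practice: scikit-learn's IsolationForest flags outliers, and a naive per-feature robust-deviation score then points to the feature most responsible for each flagged point. The explanation methods surveyed in the paper are far richer; the data, variable names, and deviation heuristic below are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import IsolationForest

# Toy data: 200 Gaussian inliers plus two hand-crafted anomalies,
# each extreme along a single feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               [[6.0, 0.2], [0.1, -7.0]]])

detector = IsolationForest(random_state=0).fit(X)
labels = detector.predict(X)  # -1 = anomaly, 1 = inlier

# Naive "explanation": rank features of a flagged point by their
# robust z-score against the inlier distribution.
inliers = X[labels == 1]
median = np.median(inliers, axis=0)
mad = np.median(np.abs(inliers - median), axis=0) + 1e-9  # robust scale

for i in np.where(labels == -1)[0]:
    scores = np.abs(X[i] - median) / mad
    culprit = int(np.argmax(scores))
    print(f"point {i}: anomalous mainly on feature {culprit} "
          f"(per-feature deviations: {np.round(scores, 1)})")

The printed output, a ranking of features by how much each contributed to a point being flagged, is one of the simplest forms an anomaly explanation may take; the paper's categorization covers this and several other forms.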
Main file
Anomaly_Explanation_AIMLAI (2).pdf (212.01 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03337036, version 1 (07-09-2021)

Identifiers

  • HAL Id: hal-03337036, version 1

Cite

Véronne Yepmo, Grégory Smits, Olivier Pivert. A Classification of Anomaly Explanation Methods. Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI), Sep 2021, (Online), France. ⟨hal-03337036⟩
88 views
438 downloads
