Reasoning with trees: interpreting CNNs using hierarchies - HAL open archive
Preprint / Working paper, Year: 2024

Reasoning with trees: interpreting CNNs using hierarchies

Abstract

Challenges persist in providing interpretable explanations for neural network reasoning in explainable AI (xAI). Existing methods like Integrated Gradients produce noisy maps, and LIME, while intuitive, may deviate from the model's reasoning. We introduce a framework that uses hierarchical segmentation techniques to produce faithful and interpretable explanations of Convolutional Neural Networks (CNNs). Our method constructs model-based hierarchical segmentations that preserve the model's reasoning fidelity and supports both human-centric and model-centric segmentation. This approach offers multiscale explanations, aiding bias identification and enhancing understanding of neural network decision-making. Experiments show that our framework, xAiTrees, delivers highly interpretable and faithful model explanations, not only surpassing traditional xAI methods but also pointing to a novel approach for enhancing xAI interpretability. Code at: https://github.com/CarolMazini/reasoning_with_trees .
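The abstract does not detail the algorithm; for the full method, see the paper and repository. As a purely illustrative sketch (not the authors' xAiTrees implementation), a multiscale, segmentation-based occlusion score over superpixel regions could look like the following. The SLIC superpixels, the per-channel mean occlusion baseline, the chosen scales, and the function names are assumptions made here for illustration.

```python
# Illustrative sketch only, not the authors' xAiTrees code: score image regions
# at several segmentation scales by the drop in a CNN's class score when the
# region is occluded, then average the maps across scales.
import numpy as np
import torch
from skimage.segmentation import slic

def region_importance(model, image, n_segments, target_class, device="cpu"):
    """Score each superpixel by the class-score drop when that region is occluded.

    image: float32 HxWx3 array, assumed already preprocessed for the model.
    """
    model.eval()
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    baseline = image.mean(axis=(0, 1))  # occlusion value: per-channel mean (assumption)

    def score(img):
        x = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0).float().to(device)
        with torch.no_grad():
            return model(x)[0, target_class].item()

    ref = score(image)
    importance = np.zeros(segments.shape, dtype=np.float32)
    for label in np.unique(segments):
        occluded = image.copy()
        occluded[segments == label] = baseline   # mask out one region
        importance[segments == label] = ref - score(occluded)
    return importance

def multiscale_explanation(model, image, target_class, scales=(16, 64, 256)):
    """Combine coarse-to-fine region importances into one multiscale map."""
    maps = [region_importance(model, image, s, target_class) for s in scales]
    return np.mean(maps, axis=0)
```

This sketch uses a stack of independent segmentation scales rather than a true hierarchy; the paper's contribution is to build and exploit a model-based hierarchical segmentation tree, for which the linked repository is the authoritative reference.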
Main file: main.pdf (9.49 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04614933, version 1 (18-06-2024)

Identifiers

  • HAL Id: hal-04614933, version 1

Cite

Caroline Mazini Rodrigues, Nicolas Boutry, Laurent Najman. Reasoning with trees: interpreting CNNs using hierarchies. 2024. ⟨hal-04614933⟩