Unleashing the Power of Gradual Patterns for Explainable AI
Abstract
Ensemble models and deep neural networks (DNNs) achieve excellent results in classification tasks. However, their "black-box" nature prevents their widespread deployment in critical fields such as healthcare. Explainable AI aims to make black-box models more understandable to humans. In the literature, predictions are mostly explained in the form of feature attributions or counterfactuals, based on neighborhoods generated randomly, with genetic algorithms, or with expert knowledge. In this paper, we show how gradual patterns can be used to generate more plausible neighborhoods without requiring expert knowledge, producing explanations better adapted to individual instances. We extend our post-hoc explainable AI framework with a comprehensive theoretical analysis and additional experimental results, compare it with state-of-the-art methods such as LIME, LORE, and SHAP, and discuss practical implementation considerations.