On the Tractability of Explaining Decisions of Classifiers - Archive ouverte HAL
Conference paper, Year: 2021

On the Tractability of Explaining Decisions of Classifiers

Abstract

Explaining decisions is at the heart of explainable AI. We investigate the computational complexity of providing a formally-correct and minimal explanation of a decision taken by a classifier. In the case of threshold (i.e. score-based) classifiers, we show that a complexity dichotomy follows from the complexity dichotomy for languages of cost functions. In particular, submodular classifiers allow tractable explanation of positive decisions, but not negative decisions (assuming P ≠ NP). This is an example of the possible asymmetry between the complexity of explaining positive and negative decisions of a particular classifier. Nevertheless, there are large families of classifiers for which explaining both positive and negative decisions is tractable, such as monotone or linear classifiers. We extend tractable cases to constrained classifiers (when there are constraints on the possible input vectors) and to the search for contrastive rather than abductive explanations. Indeed, we show that tractable classes coincide for abductive and contrastive explanations in the constrained or unconstrained settings.
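
To illustrate one of the tractable cases mentioned in the abstract, the sketch below computes a subset-minimal abductive explanation of a positive decision for a linear threshold classifier by greedy feature deletion. It is a minimal sketch, not the authors' implementation; the weights, threshold and Boolean domains are hypothetical examples, not taken from the paper.

```python
# Minimal sketch (assumed setup, not the authors' code): greedily shrink the set
# of fixed feature values while the positive decision remains guaranteed.
def abductive_explanation(weights, threshold, domains, instance):
    n = len(instance)
    explanation = set(range(n))
    for i in range(n):
        candidate = explanation - {i}
        # Worst-case score when features outside `candidate` are free:
        # fixed features keep their instance value, free features take the
        # value minimising their weighted contribution.
        worst = sum(
            weights[j] * instance[j] if j in candidate
            else min(weights[j] * v for v in domains[j])
            for j in range(n)
        )
        if worst >= threshold:        # decision still positive in the worst case
            explanation = candidate   # feature i is redundant, drop it
    return explanation

# Hypothetical linear classifier: score(x) = 3*x0 + 2*x1 + x2, positive iff score >= 4.
weights, threshold = [3, 2, 1], 4
domains = [(0, 1), (0, 1), (0, 1)]    # Boolean features
instance = [1, 1, 1]                  # classified positive (score = 6)
print(sorted(abductive_explanation(weights, threshold, domains, instance)))
# -> [0, 2]: fixing x0 = 1 and x2 = 1 already forces score >= 4, whatever x1 takes.
```

Dropping a feature is safe exactly when the decision is invariant under every completion of the freed features; for a linear classifier this worst case can be evaluated directly, which is what keeps the greedy loop polynomial.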
Main file
LIPIcs-CP-2021-21Tractability.pdf (781.06 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03523350, version 1 (12-01-2022)

Cite

Martin Cooper, Joao Marques-Silva. On the Tractability of Explaining Decisions of Classifiers. 27th International Conference on Principles and Practice of Constraint Programming (CP 2021), Oct 2021, Montpellier (online), France. pp.21:1-21:18, ⟨10.4230/LIPIcs.CP.2021.21⟩. ⟨hal-03523350⟩
113 views
92 downloads
