From Decision Trees to Explained Decision Sets - Archive ouverte HAL
Conference paper, Year: 2023

From Decision Trees to Explained Decision Sets

Abstract

Recent work demonstrated that path explanation redundancy is ubiquitous in decision trees, i.e., paths in decision trees most often include literals that are redundant for explaining a prediction. The implication of this result is that decision trees must be explained. Nevertheless, there are applications of DTs where running an explanation algorithm is impractical. For example, in time- or power-constrained settings, running software algorithms for explaining predictions would be undesirable. Although the explanations of paths in a DT do not in general themselves form a decision tree, this paper shows that one can construct a decision set from some of the decision tree explanations, such that the decision set is not only explained, but also exhibits a number of properties that are critical for replacing the original decision tree.
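As a rough illustration of the underlying idea (a minimal sketch, not the authors' algorithm), the Python snippet below shows how a root-to-leaf path of a tiny, hand-crafted decision tree can contain a redundant literal, and how the pruned path becomes one rule of a decision set. The tree, the Boolean features `x1`, `x2`, `x3`, and the helper names `tree_predict`, `entails`, and `prune_path` are all hypothetical examples introduced here for illustration.

```python
# Minimal sketch: path-explanation redundancy in a decision tree, and how
# pruned path explanations can serve as rules of a decision set.
# The tree and feature space below are hypothetical, not from the paper.
from itertools import product

FEATURES = ["x1", "x2", "x3"]  # three Boolean features (assumption)

def tree_predict(inst):
    """A hand-crafted decision tree over Boolean features (hypothetical)."""
    if inst["x1"] == 1:
        if inst["x2"] == 1:
            return "pos"
        return "pos" if inst["x3"] == 1 else "neg"
    return "neg"

def entails(literals, label):
    """Check by brute force that every instance consistent with the
    literals is predicted `label` by the tree."""
    for values in product([0, 1], repeat=len(FEATURES)):
        inst = dict(zip(FEATURES, values))
        if all(inst[f] == v for f, v in literals.items()):
            if tree_predict(inst) != label:
                return False
    return True

def prune_path(path_literals, label):
    """Greedily drop literals that are redundant for explaining `label`,
    yielding a subset-minimal rule (for one deletion order)."""
    kept = dict(path_literals)
    for feat in list(kept):
        trial = {f: v for f, v in kept.items() if f != feat}
        if entails(trial, label):
            kept = trial
    return kept

# The root-to-leaf path {x1=1, x2=0, x3=1} predicts "pos"; the literal
# x2=0 is redundant, so the pruned rule is {x1=1, x3=1} -> pos.
path = {"x1": 1, "x2": 0, "x3": 1}
rule = prune_path(path, "pos")
print(rule, "->", "pos")  # expected: {'x1': 1, 'x3': 1} -> pos
```

Collecting such pruned rules over (some of) the tree's paths yields a set of if-then rules, i.e., a decision set; the paper's contribution concerns which explanations to select and which properties the resulting decision set provably retains with respect to the original tree.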

Dates and versions

hal-04311132 , version 1 (28-11-2023)

Identifiers

Cite

Xuanxiang Huang, Joao Marques-Silva. From Decision Trees to Explained Decision Sets. 26th European Conference on Artificial Intelligence (ECAI 2023), European Association for Artificial Intelligence (EurAI); Polish Artificial Intelligence Society (PSSI), Sep 2023, Kraków, Poland. pp. 1100-1108, ⟨10.3233/FAIA230384⟩. ⟨hal-04311132⟩