Optimal Interpretability-Performance Trade-off of Classification Trees with Black-Box Reinforcement Learning
Report (Research Report) Year: 2023


Abstract

Interpretability of AI models allows for user safety checks to build trust in such models. In particular, decision trees (DTs) provide a global view of the learned model and clearly outline the role of the features that are critical to classifying a given data point. However, interpretability is hindered if the DT is too large. To learn compact trees, a Reinforcement Learning (RL) framework has recently been proposed to explore the space of DTs. A given supervised classification task is modeled as a Markov decision problem (MDP) and then augmented with additional actions that gather information about the features, which is equivalent to building a DT. By appropriately penalizing these actions, the RL agent learns to optimally trade off the size and performance of a DT. Doing so, however, requires the agent to solve a partially observable MDP. The main contribution of this paper is to prove that it is sufficient to solve a fully observable problem to learn a DT that optimizes the interpretability-performance trade-off. As such, any planning or RL algorithm can be used. We demonstrate the effectiveness of this approach on a set of classical supervised classification datasets and compare it with other methods that optimize this trade-off.
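To make the framework described in the abstract concrete, below is a minimal, hypothetical sketch of such an augmented MDP; it is not the authors' implementation, and the class name `InformationGatheringMDP`, the penalty `zeta`, and the fixed threshold grid are illustrative assumptions. Information-gathering actions test a feature against a threshold and refine the known bounds on that feature; classification actions terminate the episode with a reward for a correct prediction.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): a classification dataset
# wrapped as an MDP whose extra actions gather information about features.
# An information action tests "x[f] <= t" and tightens the known bounds on
# feature f; a classification action ends the episode. Penalizing the
# information actions by `zeta` trades off tree size against accuracy.

class InformationGatheringMDP:
    def __init__(self, X, y, n_classes, thresholds=(0.25, 0.5, 0.75), zeta=0.05):
        self.X, self.y = X, y                    # features assumed normalized to [0, 1]
        self.n_features = X.shape[1]
        self.n_classes = n_classes
        # One information-gathering action per (feature, threshold) pair.
        self.splits = [(f, t) for f in range(self.n_features) for t in thresholds]
        self.zeta = zeta                         # penalty per split: the trade-off knob

    def reset(self):
        i = np.random.randint(len(self.X))
        self.x, self.label = self.X[i], self.y[i]
        # Observation: known lower/upper bounds on each feature value.
        self.low = np.zeros(self.n_features)
        self.high = np.ones(self.n_features)
        return np.concatenate([self.low, self.high])

    def step(self, action):
        if action < len(self.splits):            # information-gathering action
            f, t = self.splits[action]
            if self.x[f] <= t:
                self.high[f] = min(self.high[f], t)
            else:
                self.low[f] = max(self.low[f], t)
            return np.concatenate([self.low, self.high]), -self.zeta, False
        pred = action - len(self.splits)         # classification action: episode ends
        return np.concatenate([self.low, self.high]), float(pred == self.label), True
```

Under this reading, a deterministic policy over the bounds observation corresponds to a DT: each information-gathering action is an internal node testing x[f] <= t, and each classification action is a leaf. Raising `zeta` makes splits more expensive and thus favors smaller trees.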
Main file
9503.pdf (1.1 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04060986, version 1 (07-04-2023)

Identifiers

HAL Id: hal-04060986

Cite

Hector Kohler, Riad Akrour, Philippe Preux. Optimal Interpretability-Performance Trade-off of Classification Trees with Black-Box Reinforcement Learning. RR-9503, Inria Lille Nord Europe - Laboratoire CRIStAL - Université de Lille. 2023. ⟨hal-04060986⟩