Predicate-based explanation of a Reinforcement Learning agent via action importance evaluation
Abstract
To understand the impact of a Reinforcement Learning (RL) agent's decisions on the satisfaction of a given arbitrary predicate, we present a method based on evaluating the importance of actions. It highlights to the user the most important action(s) (with respect to the predicate) in a history of the agent's interactions with the environment. Having shown that computing the importance of an action for a predicate to hold is #W[1]-hard, we propose a time-saving approximation that considers only the most likely transitions in the environment. Experiments confirm the relevance of this approach.
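As a rough illustration of the idea only, and not the paper's exact algorithm, the sketch below scores an action's importance for a predicate by rolling the policy forward along the single most likely transition at each step, which is one simple way to realize the "most likely transitions" approximation. All names here (`TransitionModel`, `action_importance`, the scoring rule) are hypothetical assumptions for the sketch, not definitions from the paper.

```python
from typing import Callable, List, Tuple

State = int
Action = int
# Hypothetical transition model: (state, action) -> [(next_state, probability)].
TransitionModel = Callable[[State, Action], List[Tuple[State, float]]]


def most_likely_successor(model: TransitionModel, s: State, a: Action) -> State:
    # Approximation: keep only the most probable transition, collapsing
    # the stochastic environment into a deterministic one.
    return max(model(s, a), key=lambda pair: pair[1])[0]


def predicate_reached(model: TransitionModel,
                      policy: Callable[[State], Action],
                      s: State, horizon: int,
                      predicate: Callable[[State], bool]) -> bool:
    # Follow the policy along most likely transitions for `horizon` steps
    # and report whether the predicate is satisfied along the way.
    for _ in range(horizon):
        if predicate(s):
            return True
        s = most_likely_successor(model, s, policy(s))
    return predicate(s)


def action_importance(model: TransitionModel,
                      policy: Callable[[State], Action],
                      actions: List[Action],
                      s: State, a: Action, horizon: int,
                      predicate: Callable[[State], bool]) -> float:
    # Hypothetical score: does taking `a` in `s` lead to the predicate,
    # compared with how often the alternative actions do?
    with_a = predicate_reached(
        model, policy, most_likely_successor(model, s, a), horizon, predicate)
    alternatives = [alt for alt in actions if alt != a]
    alt_hits = sum(
        predicate_reached(model, policy,
                          most_likely_successor(model, s, alt),
                          horizon, predicate)
        for alt in alternatives)
    return float(with_a) - alt_hits / max(len(alternatives), 1)
```

Under this toy scoring rule, an action taken at some point of the history gets a high score when it leads to the predicate while the alternatives available in the same state do not; ranking the history's actions by this score then surfaces the most important one(s) to the user.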