Predictive Explanations for and by Reinforcement Learning
Abstract
To understand a reinforcement learning (RL) agent's behavior within its environment, we propose an answer to 'What is likely to happen?' in the form of a predictive explanation. It is composed of three scenarios: best-case, worst-case, and most-probable, which we show are computationally difficult to find (W[1]-hard). We propose linear-time approximations that treat the environment as a favorable, hostile, or neutral RL agent, respectively. Experiments validate this approach. Furthermore, we give a dynamic-programming algorithm that finds an optimal summary of a long scenario.
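To make the approximation idea concrete, here is a minimal sketch, not the authors' implementation, of how best-case, worst-case, and most-probable scenarios can be generated in time linear in the horizon by letting the environment act as a favorable, hostile, or neutral agent against a fixed policy. The toy MDP, the `rollout` function, and all names are hypothetical illustrations.

```python
import random

# Toy MDP (assumption): transitions[state][action] -> list of (next_state, probability, reward)
transitions = {
    "s0": {"a": [("s1", 0.7, 1.0), ("s2", 0.3, -1.0)]},
    "s1": {"a": [("s1", 0.5, 2.0), ("s2", 0.5, 0.0)]},
    "s2": {"a": [("s2", 1.0, -2.0)]},
}

# Fixed agent policy under explanation (assumption).
policy = {"s0": "a", "s1": "a", "s2": "a"}

def rollout(start, horizon, env_mode):
    """Build one scenario by treating the environment as an agent:
    'best'     -> favorable: pick the highest-reward outcome,
    'worst'    -> hostile:   pick the lowest-reward outcome,
    'probable' -> neutral:   pick the most likely outcome.
    Each step is a constant-time choice, so the scenario is linear in the horizon."""
    state, scenario = start, []
    for _ in range(horizon):
        action = policy[state]
        outcomes = transitions[state][action]
        if env_mode == "best":
            nxt, _, r = max(outcomes, key=lambda o: o[2])
        elif env_mode == "worst":
            nxt, _, r = min(outcomes, key=lambda o: o[2])
        else:
            nxt, _, r = max(outcomes, key=lambda o: o[1])
        scenario.append((state, action, r))
        state = nxt
    return scenario

if __name__ == "__main__":
    for mode in ("best", "worst", "probable"):
        print(mode, rollout("s0", horizon=5, env_mode=mode))
```

This sketch only illustrates the greedy, per-step view of a favorable/hostile/neutral environment; it does not implement the paper's W[1]-hardness results or the dynamic-programming summarization algorithm.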