A cascaded supervised learning approach to inverse reinforcement learning - UMI 2958 - Research area: Computer Science
Conference paper - Year: 2013

A cascaded supervised learning approach to inverse reinforcement learning

Abstract

This paper considers the Inverse Reinforcement Learning (IRL) problem, that is, inferring a reward function for which a demonstrated expert policy is optimal. We propose to break the IRL problem down into two generic Supervised Learning steps: this is the Cascaded Supervised IRL (CSI) approach. A classification step that defines a score function is followed by a regression step providing a reward function. A theoretical analysis shows that the demonstrated expert policy is near-optimal for the computed reward function. Not needing to repeatedly solve a Markov Decision Process (MDP) and the ability to leverage existing techniques for classification and regression are two important advantages of the CSI approach. It is furthermore empirically shown to compare favorably to state-of-the-art approaches when using only transitions sampled according to the expert policy, up to the use of some heuristics. This is exemplified on two classical benchmarks (the mountain car problem and a highway driving simulator).
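To make the two-step cascade concrete, here is a minimal sketch of one way it could be realized with off-the-shelf supervised learners. The choice of scikit-learn, of logistic regression for the classification step, of a random forest for the regression step, and of a Bellman-like relation to turn the classifier's score function into reward targets are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a cascaded supervised IRL pipeline (assumptions noted above).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

GAMMA = 0.99  # assumed discount factor


def cascaded_supervised_irl(states, actions, next_states, n_actions):
    """states: (N, d) expert states; actions: (N,) expert actions;
    next_states: (N, d) successor states sampled along the expert policy.
    Assumes n_actions > 2 so decision_function returns one score per action."""
    # Step 1 (classification): fit a multi-class classifier on expert
    # state-action pairs; its per-action scores play the role of a
    # score function q(s, a).
    clf = LogisticRegression(max_iter=1000).fit(states, actions)

    # Score of the action actually taken in each state.
    q_sa = clf.decision_function(states)[np.arange(len(states)), actions]
    # Greedy score in the successor state.
    q_next_max = clf.decision_function(next_states).max(axis=1)

    # Step 2 (regression): treat the score function as if it were an
    # optimal Q-function and derive reward targets from sampled
    # transitions via r(s, a) ~= q(s, a) - gamma * max_a' q(s', a'),
    # then fit a regressor so the reward is defined beyond the
    # demonstrated transitions.
    targets = q_sa - GAMMA * q_next_max
    sa_features = np.hstack([states, np.eye(n_actions)[actions]])
    reward_model = RandomForestRegressor().fit(sa_features, targets)
    return reward_model  # reward_model.predict gives r(s, a) on new pairs
```

Note that this sketch only requires transitions sampled along the expert policy and never solves an MDP, which is the practical point the abstract emphasizes.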
Main file
csi_irl.pdf (591.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00869804, version 1 (06-11-2017)

Identifiers

Cite

Edouard Klein, Bilal Piot, Matthieu Geist, Olivier Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD 2013), Sep 2013, Prague, Czech Republic. pp.1-16, ⟨10.1007/978-3-642-40988-2_1⟩. ⟨hal-00869804⟩

Collections

SUPELEC
237 views
196 downloads
