Inverse Reinforcement Learning in Relational Domains - Archive ouverte HAL
Conference Papers Year : 2015

Inverse Reinforcement Learning in Relational Domains

Abstract

In this work, we introduce the first approach to the Inverse Reinforcement Learning (IRL) problem in relational domains. IRL has been used to recover a more compact representation of the expert policy, leading to better generalization performance across different contexts. Relational learning, on the other hand, allows problems with a varying (potentially infinite) number of objects to be represented, and thus provides more generalizable representations of problems and skills. We show how these formalisms can be combined into a new IRL algorithm for relational domains that efficiently recovers, from expert data, rewards with strong generalization and transfer properties. We evaluate our algorithm on representative tasks and study the impact of diverse experimental conditions such as: the number of demonstrations, knowledge about the dynamics, transfer across varying problem dimensions, and changing dynamics.
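To make the IRL setting referred to in the abstract concrete, here is a minimal, self-contained Python sketch of a generic linear-reward IRL loop based on feature-expectation matching (in the spirit of projection-style IRL). It is not the relational algorithm from the paper: the grid world, the binary features in phi() (stand-ins for ground relational predicates), and all function names are illustrative assumptions.

import numpy as np

# Generic linear-reward IRL by feature-expectation matching on a tiny grid MDP.
# The grid stands in for a relational domain; the binary features in phi() stand
# in for ground relational predicates such as at_goal(s). This is NOT the
# relational IRL algorithm of the paper, only an illustration of the IRL setup.

N = 5                                           # 5x5 grid, goal at (N-1, N-1)
GAMMA = 0.9
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def phi(s):
    """Binary state features (stand-ins for relational predicates)."""
    r, c = s
    return np.array([float(r == N - 1 and c == N - 1),   # at_goal(s)
                     float(r == N - 1),                   # on_last_row(s)
                     float(c == N - 1)])                  # on_last_col(s)

def step(s, a):
    """Deterministic dynamics: move and clamp to the grid."""
    r, c = s
    return (min(max(r + a[0], 0), N - 1), min(max(c + a[1], 0), N - 1))

def greedy_policy(w):
    """Value iteration under the linear reward w . phi, then act greedily."""
    states = [(r, c) for r in range(N) for c in range(N)]
    V = {s: 0.0 for s in states}
    for _ in range(200):
        V = {s: max(w @ phi(step(s, a)) + GAMMA * V[step(s, a)]
                    for a in ACTIONS) for s in states}
    return lambda s: max(ACTIONS,
                         key=lambda a: w @ phi(step(s, a)) + GAMMA * V[step(s, a)])

def feature_expectations(policy, start=(0, 0), horizon=30):
    """Discounted sum of features along a deterministic rollout."""
    mu, s = np.zeros(3), start
    for t in range(horizon):
        mu += (GAMMA ** t) * phi(s)
        s = step(s, policy(s))
    return mu

# The "expert" acts optimally for a hidden reward: reach the goal cell.
w_true = np.array([1.0, 0.0, 0.0])
mu_expert = feature_expectations(greedy_policy(w_true))

# IRL loop: adjust the reward weights until the learner's feature
# expectations match the expert's.
w = np.zeros(3)
for _ in range(10):
    mu_learner = feature_expectations(greedy_policy(w))
    gap = mu_expert - mu_learner
    if np.linalg.norm(gap) < 1e-3:              # learner imitates the expert
        break
    w = w + gap                                  # simple unnormalized update

print("recovered reward weights:", np.round(w, 2))
print("feature-expectation gap :", round(float(np.linalg.norm(gap)), 4))

The recovered weights generally differ from the hidden ones (IRL rewards are not unique), but they induce a policy whose feature expectations match the expert's, which is the property the abstract's notion of reward recovery relies on.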
Main file: IJCAI2015_HAL.pdf (675.96 KB)
Origin : Files produced by the author(s)

Dates and versions

hal-01154650, version 1 (22-05-2015)

Identifiers

  • HAL Id : hal-01154650, version 1

Cite

Thibaut Munzer, Bilal Piot, Matthieu Geist, Olivier Pietquin, Manuel Lopes. Inverse Reinforcement Learning in Relational Domains. International Joint Conferences on Artificial Intelligence, Jul 2015, Buenos Aires, Argentina. ⟨hal-01154650⟩