Conference paper, 2014

Direct Value Learning: a Rank-Invariant Approach to Reinforcement Learning

Abstract

Taking inspiration from inverse reinforcement learning, the proposed Direct Value Learning for Reinforcement Learning (DIVA) approach uses light priors to generate inappropriate behaviors, and uses the corresponding state sequences to directly learn a value function. When the transition model is known, this value function directly defines a (nearly) optimal controller. Otherwise, the value function is extended to the state-action space using off-policy learning. The experimental validation of DIVA on the mountain car problem, under the assumption that the target state is known, shows the robustness of the approach compared to SARSA. The experimental validation on the bicycle problem shows that DIVA still finds good policies when this assumption is relaxed.
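In sketch form, and under the rank-based reading suggested by the title, the idea can be illustrated as follows: states along an intentionally degraded ("inappropriate") trajectory are assumed to lose value over time, and a value function is fitted to the induced pairwise ranking constraints. The Python snippet below is a minimal sketch under these assumptions; the linear value function, feature map phi, hinge margin, and toy trajectory generator are illustrative choices, not the paper's exact algorithm.

import numpy as np

def phi(state):
    """Hypothetical feature map: the raw state plus a bias term."""
    return np.append(state, 1.0)

def learn_value(trajectories, dim, lr=0.01, margin=1.0, epochs=100, reg=1e-3):
    """Fit a linear value function V(s) = w . phi(s) from the ranking
    constraints V(s_t) > V(s_{t+1}) along each degrading trajectory."""
    w = np.zeros(dim + 1)
    for _ in range(epochs):
        for traj in trajectories:
            for s, s_next in zip(traj[:-1], traj[1:]):
                # Ranking constraint: V(s) should exceed V(s_next) by a margin.
                diff = phi(s) - phi(s_next)
                if w @ diff < margin:            # constraint violated
                    w += lr * (diff - reg * w)   # hinge-loss gradient step
                else:
                    w -= lr * reg * w            # regularization only
    return w

# Toy usage: 1-D states drifting away from a target at 0, so value decreases.
rng = np.random.default_rng(0)
trajs = [np.cumsum(rng.uniform(0.0, 0.5, size=20)).reshape(-1, 1)
         for _ in range(10)]
w = learn_value(trajs, dim=1)
print("learned weights:", w)  # expect a negative weight on the state coordinate

On this toy problem the learned weight on the state coordinate is negative, i.e., value decreases with distance from the target, which is the ranking the degraded trajectories encode.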
Main file
Nips2014_workshop_DiVa.pdf (811.56 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01090982, version 1 (04-12-2014)

License

Public domain

Identifiers

  • HAL Id: hal-01090982, version 1

Cite

Basile Mayeur, Riad Akrour, Michèle Sebag. Direct Value Learning: a Rank-Invariant Approach to Reinforcement Learning. Autonomously Learning Robots workshop at NIPS 2014 (organized by Gerhard Neumann, TU Darmstadt; Joelle Pineau, McGill University; Peter Auer, Uni Leoben; Marc Toussaint, Uni Stuttgart), Dec 2014, Montreal, Canada. ⟨hal-01090982⟩
318 Views
253 Downloads
