Book chapter, Year: 2011

Least-squares methods for policy iteration

Abstract

Approximate reinforcement learning deals with the essential problem of applying reinforcement learning in large and continuous state-action spaces by using function approximators to represent the solution. This chapter reviews least-squares methods for policy iteration, an important class of algorithms for approximate reinforcement learning. We discuss three techniques for solving the core policy evaluation component of policy iteration: least-squares temporal difference, least-squares policy evaluation, and Bellman residual minimization. We introduce these techniques starting from their general mathematical principles and detail them down to fully specified algorithms. We pay attention to online variants of policy iteration, and provide a numerical example highlighting the behavior of representative offline and online methods. For the policy evaluation component as well as for the overall resulting approximate policy iteration, we provide guarantees on the performance obtained asymptotically, as the number of samples processed and iterations executed grows to infinity. We also provide finite-sample results, which apply when only a finite number of samples and iterations are available. Finally, we outline several extensions and improvements to the techniques and methods reviewed.
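
As a rough illustration of the policy evaluation step reviewed in the chapter, the sketch below implements a batch LSTD-Q update (least-squares temporal difference for Q-functions) with linear features, the evaluation component typically used in least-squares policy iteration. The sample format, the feature map phi, the discount factor gamma, and the regularization term are illustrative assumptions, not the chapter's own notation or implementation.

    # Minimal LSTD-Q sketch: estimate linear Q-function weights w such that
    # Q(s, a) ~= phi(s, a) @ w for a fixed policy, from a batch of transitions.
    # All names (samples, policy, phi, gamma, reg) are hypothetical placeholders.
    import numpy as np

    def lstd_q(samples, policy, phi, n_features, gamma=0.95, reg=1e-6):
        """samples: iterable of (s, a, r, s_next) transitions collected beforehand
        policy:  maps a state to the action chosen by the policy being evaluated
        phi:     feature map (state, action) -> 1-D array of length n_features
        """
        A = np.zeros((n_features, n_features))
        b = np.zeros(n_features)
        for s, a, r, s_next in samples:
            f = phi(s, a)                         # features of the visited pair
            f_next = phi(s_next, policy(s_next))  # features under the evaluated policy
            A += np.outer(f, f - gamma * f_next)  # accumulate the LSTD matrix
            b += f * r                            # accumulate the reward vector
        # A small regularization term keeps A invertible with redundant features.
        return np.linalg.solve(A + reg * np.eye(n_features), b)

In a full least-squares policy iteration loop, such an evaluation step would alternate with a greedy improvement step that selects, in each state, the action maximizing phi(s, a) @ w, repeating until the weights stabilize.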
Main file: lspi_chapter.pdf (878.23 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00830122, version 1 (04-06-2013)

Identifiers

  • HAL Id: hal-00830122, version 1

Cite

Lucian Busoniu, Alessandro Lazaric, Mohammad Ghavamzadeh, Rémi Munos, Robert Babuska, et al. Least-squares methods for policy iteration. Reinforcement Learning: State of the Art, Springer, pp. 75-109, 2011. ⟨hal-00830122⟩
347 Views
761 Downloads
