Conference Paper, Year: 2013

Policy Improvement: Between Black-Box Optimization and Episodic Reinforcement Learning

Abstract

Policy improvement methods seek to optimize the parameters of a policy with respect to a utility function. There are two main approaches to this optimization: reinforcement learning (RL) and black-box optimization (BBO). In recent years, benchmark comparisons between RL and BBO have been made, and several attempts have been made to specify which approach works best for which problem classes. In this article, we make several contributions to this line of research by: 1) classifying several RL algorithms in terms of their algorithmic properties; 2) showing how the derivation of ever more powerful RL algorithms displays a trend towards BBO; 3) continuing this trend by applying two modifications to the state-of-the-art PI2 algorithm, which yields an algorithm we denote PIBB, and showing that PIBB is a BBO algorithm; 4) demonstrating that PIBB achieves similar or better performance than PI2 on several evaluation tasks; 5) analyzing why BBO outperforms RL on these tasks. Rather than making a general case for BBO or RL (we expect their relative performance to depend on the task considered), we provide two algorithms with which such cases can be made, as the algorithms are identical in all respects except in being RL or BBO approaches to policy improvement.
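Although the full paper is not deposited here, the abstract's central move (reducing PI2's per-time-step machinery to episodic, reward-weighted averaging of parameter perturbations) can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative implementation of a PIBB-style update under the standard reward-weighted-averaging formulation; the function name pibb_update, the eliteness parameter h, and the toy quadratic cost are assumptions for illustration, not the authors' code.

```python
import numpy as np

def pibb_update(theta, cost_fn, n_samples=10, sigma=0.1, h=10.0, rng=None):
    """One PIBB-style update: perturb the policy parameters with constant
    Gaussian noise per rollout (the black-box simplification of PI2) and
    average the perturbations, weighted by exponentiated normalized costs."""
    rng = np.random.default_rng() if rng is None else rng
    eps = sigma * rng.standard_normal((n_samples, theta.size))   # one perturbation per rollout
    costs = np.array([cost_fn(theta + e) for e in eps])          # episodic costs only
    # Map costs to weights: lower cost -> higher weight.
    c_min, c_max = costs.min(), costs.max()
    weights = np.exp(-h * (costs - c_min) / (c_max - c_min + 1e-12))
    weights /= weights.sum()
    return theta + weights @ eps                                 # reward-weighted averaging

# Toy usage: treat a quadratic function as the episodic cost of a "policy".
theta = np.ones(5)
for _ in range(100):
    theta = pibb_update(theta, lambda th: np.sum(th ** 2))
print(theta)  # approaches the zero vector
```

Because each rollout perturbs the whole parameter vector once and the update uses only episodic costs, the policy is treated as a black box, which is the sense in which such an update is a BBO rather than an RL method.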
No file deposited

Dates and versions

hal-00922133, version 1 (23-12-2013)

Identifiers

  • HAL Id: hal-00922133, version 1

Cite

Freek Stulp, Olivier Sigaud. Policy Improvement: Between Black-Box Optimization and Episodic Reinforcement Learning. Journées Francophones Planification, Décision, et Apprentissage pour la conduite de systèmes, 2013, Lille, France. ⟨hal-00922133⟩
436 Views
0 Downloads
