HAL open archive
Conference paper, Year: 2021

Learning Value Functions in Deep Policy Gradients using Residual Variance

Abstract

Policy gradient algorithms have proven to be successful in diverse decision making and control tasks. However, these methods suffer from high sample complexity and instability issues. In this paper, we address these challenges by providing a different approach for training the critic in the actor-critic framework. Our work builds on recent studies indicating that traditional actor-critic algorithms do not succeed in fitting the true value function, calling for the need to identify a better objective for the critic. In our method, the critic uses a new state-value (resp. state-action-value) function approximation that learns the value of the states (resp. state-action pairs) relative to their mean value rather than the absolute value as in conventional actor-critic. We prove the theoretical consistency of the new gradient estimator and observe dramatic empirical improvement across a variety of continuous control tasks and algorithms. Furthermore, we validate our method in tasks with sparse rewards, where we provide experimental evidence and theoretical insights.
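The abstract describes replacing the conventional critic objective with one that fits state values relative to their mean. As a rough illustration only, and not the paper's actual implementation, the sketch below contrasts the usual mean-squared-error critic loss with a residual-variance loss of the kind the abstract suggests; the function names and the exact form of the loss are assumptions inferred from the abstract.

```python
import torch

def mse_critic_loss(values: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    # Conventional critic objective: regress predicted values onto return targets.
    return ((returns - values) ** 2).mean()

def residual_variance_critic_loss(values: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    # Illustrative residual-variance objective (assumption based on the abstract):
    # penalize the variance of the residuals, so the critic only has to learn
    # state values up to a shared constant, i.e. relative to their mean.
    residuals = returns - values
    return residuals.var(unbiased=False)
```

Because the variance is invariant to adding a constant to all predictions, such a critic is free to ignore the absolute level of the value function; see the paper itself for the exact objective and its consistency analysis.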
Main file: iclr_avec.pdf (3.02 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02964174 , version 1 (12-10-2020)
hal-02964174 , version 2 (11-02-2021)
hal-02964174 , version 3 (15-03-2021)

Identifiers

hal-02964174

Cite

Yannis Flet-Berliac, Reda Ouhamma, Odalric-Ambrym Maillard, Philippe Preux. Learning Value Functions in Deep Policy Gradients using Residual Variance. ICLR 2021 - International Conference on Learning Representations, May 2021, Vienna / Virtual, Austria. ⟨hal-02964174v3⟩