Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation - Archive ouverte HAL
Conference Paper, Year: 2022

Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation

Abstract

In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. Since this noise is heteroscedastic, its effects can be mitigated with uncertainty-based weights in the optimization process. Previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL, and introduce inverse-variance RL, a Bayesian framework that combines probabilistic ensembles and Batch Inverse Variance weighting. We propose a method whereby these two complementary uncertainty estimation techniques account for both the uncertainty in the Q-value estimate and the stochasticity of the environment, to better mitigate the negative impacts of noisy supervision. Our results show significant improvements in sample efficiency on discrete and continuous control tasks.
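For intuition, the following is a minimal PyTorch sketch of the weighting scheme the abstract describes: a probabilistic ensemble of target critics yields both epistemic uncertainty (disagreement between heads) and aleatoric uncertainty (predicted noise), and the per-sample TD errors are then down-weighted by the inverse of the total target variance. The function names (`ensemble_target_variance`, `biv_loss`) and the fixed variance floor `xi` are illustrative assumptions, not the authors' released code; in particular, the paper selects the variance floor adaptively to guarantee a minimum effective batch size, which is omitted here for brevity.

```python
import torch

def ensemble_target_variance(means, variances):
    # means, variances: shape (n_ensemble, batch), the Gaussian parameters
    # predicted by each head of a probabilistic target-critic ensemble.
    epistemic = means.var(dim=0, unbiased=False)  # disagreement between heads
    aleatoric = variances.mean(dim=0)             # predicted noise, averaged
    return means.mean(dim=0), epistemic + aleatoric

def biv_loss(q_pred, td_target, target_var, xi=1e-2):
    # Simplified Batch Inverse Variance weighting: samples with a more
    # uncertain TD target contribute less to the critic loss. Here xi is
    # a fixed hyperparameter rather than the paper's adaptive choice.
    weights = 1.0 / (target_var + xi)
    weights = weights / weights.sum()  # normalize over the batch
    return (weights * (q_pred - td_target) ** 2).sum()

# Usage sketch: an ensemble of 5 target heads over a batch of 32 transitions.
means = torch.randn(5, 32)
variances = torch.rand(5, 32)
q_pred = torch.randn(32, requires_grad=True)
td_target, target_var = ensemble_target_variance(means, variances)
loss = biv_loss(q_pred, td_target, target_var)
loss.backward()
```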

Dates and versions

hal-04255203, version 1 (23-10-2023)

Identifiers

Cite

Vincent Mai, Kaustubh Mani, Liam Paull. Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation. International Conference on Learning Representations (ICLR 2022), Apr 2022, Virtual conference. ⟨10.48550/arXiv.2201.01666⟩. ⟨hal-04255203⟩