Efficient reinforcement learning with Fleming-Viot particle systems: application to stochastic networks with rarely observed rewards
Abstract
We consider reinforcement learning control problems under the expected reward criterion in which non-zero rewards are both sparse and rare, that is, they occur in very few states and these states have a very small stationary probability under all policies. In this setting, the usual exploration techniques, including importance sampling, are inapplicable because no policy exists that increases the visit frequency of the rare states. Using renewal theory and Fleming-Viot particle systems, we propose a novel approach that exploits prior knowledge of the sparse structure of the reward landscape to boost exploration of the rare non-zero rewards and achieve an accurate estimation of their stationary probability. We also demonstrate how to combine this methodology with policy gradient learning to construct the FVRL algorithm, which efficiently solves control problems in these scenarios.
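To make the estimation idea concrete, the following is a minimal, purely illustrative Python sketch (not the paper's implementation) of a Fleming-Viot particle system for an M/M/1 queue: independent copies of the queue are killed when they re-enter a frequently visited absorbing set and are instantly reborn at the position of another particle chosen uniformly at random. The empirical particle distribution approximates the quasi-stationary distribution, which, combined with renewal-theoretic return-time estimates, yields the stationary probability of the rare states. All rates, the particle count, and the rare state below are assumed values for illustration.

```python
import random

lam, mu = 0.5, 1.0        # assumed arrival / service rates
N_PARTICLES = 100         # assumed number of Fleming-Viot particles
T_HORIZON = 10_000.0      # assumed simulation time
ABSORBING = 0             # absorbing set A = {0}
RARE_STATE = 20           # rarely visited state of interest

particles = [1] * N_PARTICLES   # start all particles just outside A
t = 0.0
occupation = 0.0                # time-integral of the fraction of particles in RARE_STATE

while t < T_HORIZON:
    # Every alive particle sits in a state >= 1, so each carries an
    # arrival clock (rate lam) and a departure clock (rate mu).
    total_rate = N_PARTICLES * (lam + mu)
    dt = random.expovariate(total_rate)
    occupation += dt * sum(x == RARE_STATE for x in particles) / N_PARTICLES
    t += dt

    i = random.randrange(N_PARTICLES)            # particle that jumps
    up = random.random() < lam / (lam + mu)      # arrival vs departure
    particles[i] += 1 if up else -1

    if particles[i] == ABSORBING:
        # Fleming-Viot resurrection: restart at another particle's position
        j = random.choice([k for k in range(N_PARTICLES) if k != i])
        particles[i] = particles[j]

# Empirical quasi-stationary weight of the rare state (to be rescaled by
# renewal-theoretic return times to obtain a stationary probability).
print("QSD estimate of the rare state:", occupation / t)
```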
We provide theoretical guarantees for the convergence of both the stationary probability estimator and the policy gradient learner, and illustrate the method on two optimisation problems that maximise the expected reward: a simple M/M/1/K queue system in which the blocking threshold K is optimised, and a two-job-class loss network in which a threshold-type rejection policy is optimised. Our results show that FVRL learns the optimal thresholds far more efficiently than vanilla Monte-Carlo reinforcement learning.
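For readers unfamiliar with the baseline being compared against, the sketch below shows a generic REINFORCE-style update of a smoothed admission threshold theta for a single-server queue; it is an assumed illustration of vanilla Monte-Carlo policy gradient, not the FVRL algorithm, which would replace the Monte-Carlo return estimates by Fleming-Viot-based estimates. The acceptance rule sigmoid(theta - x), the reward and cost constants, and the step sizes are all hypothetical.

```python
import math
import random

LAM, MU = 0.7, 1.0       # assumed arrival / service rates
R_ACCEPT, C_HOLD = 5.0, 1.0   # assumed reward per accepted job / holding cost rate
ALPHA = 0.05             # assumed learning rate
EPISODE_EVENTS = 2000    # events simulated per gradient estimate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run_episode(theta):
    """Simulate the queue; return the episode reward and the summed grad-log-probabilities."""
    x, reward, grad_log = 0, 0.0, 0.0
    for _ in range(EPISODE_EVENTS):
        rate = LAM + (MU if x > 0 else 0.0)
        dt = random.expovariate(rate)
        reward -= C_HOLD * x * dt                 # holding cost accrued since the last event
        if random.random() < LAM / rate:          # arrival
            p = sigmoid(theta - x)                # acceptance probability at occupancy x
            accept = random.random() < p
            # d/dtheta log pi(a|x): (1 - p) if accepted, -p if rejected
            grad_log += (1.0 - p) if accept else -p
            if accept:
                x += 1
                reward += R_ACCEPT
        else:                                     # departure
            x -= 1
    return reward, grad_log

theta = 5.0                                       # initial threshold parameter
for _ in range(200):
    ret, glog = run_episode(theta)
    theta += ALPHA * ret * glog / EPISODE_EVENTS  # crude REINFORCE step
print("learned threshold parameter:", theta)
```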