Towards Probabilistic Safety Guarantees for Model-Free Reinforcement Learning
Abstract
Improving safety in model-free reinforcement learning is necessary if we expect to deploy such systems in safety-critical settings. However, most existing constrained reinforcement learning methods offer no formal guarantees of constraint satisfaction. In this paper, we present a theoretical formulation of a safety layer that encapsulates model epistemic uncertainty over a distribution of constraint-model approximations and can provide probabilistic guarantees of constraint satisfaction.
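The paper's full construction is not given in this abstract, but the idea of a safety layer over a distribution of constraint-model approximations can be illustrated with a small sketch. The following Python snippet is an assumption-laden illustration, not the authors' method: it uses an ensemble of linearized constraint models in the style of action-projection safety layers (e.g., Dalal et al., 2018), and the names `sample_constraint_models`, `safe_action`, the constraint `limit`, the risk level `delta`, and the quantile heuristic are all hypothetical choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_constraint_models(state, n_models=20):
    """Hypothetical stand-in for an ensemble trained on transition data.

    Each member returns (c_i, g_i): a constraint value and its gradient
    w.r.t. the action, i.e. the linearization c(s, a) ~= c_i(s) + g_i(s)^T a.
    Here the members are synthetic perturbations of a placeholder model.
    """
    base_c = np.sin(state).sum()   # placeholder constraint value at `state`
    base_g = np.cos(state)         # placeholder constraint gradient
    return [(base_c + 0.05 * rng.standard_normal(),
             base_g + 0.05 * rng.standard_normal(base_g.shape))
            for _ in range(n_models)]

def safe_action(state, proposed_action, limit=1.0, delta=0.05):
    """Correct `proposed_action` toward satisfying c(s, a) <= limit under
    epistemic uncertainty, represented by the ensemble of sampled models.

    For a single linear constraint, the projection has the closed form
        a_safe = a - max(0, (c + g^T a - limit) / ||g||^2) * g.
    As a crude heuristic, we compute that correction magnitude for every
    ensemble member, take its (1 - delta) quantile, and apply it along the
    mean gradient, so the correction reflects most sampled models rather
    than only the mean one.  This is illustrative, not a formal guarantee.
    """
    models = sample_constraint_models(state)
    corrections = []
    for c, g in models:
        violation = c + g @ proposed_action - limit
        corrections.append(max(0.0, violation / (g @ g + 1e-8)))
    lam_star = np.quantile(corrections, 1.0 - delta)
    g_mean = np.mean([g for _, g in models], axis=0)
    return proposed_action - lam_star * g_mean

state = rng.standard_normal(3)
action = rng.standard_normal(3)
print(safe_action(state, action))
```

The ensemble here is the simplest proxy for a distribution over constraint-model approximations; a Bayesian posterior or bootstrapped models could play the same role, with the quantile step replaced by whatever high-confidence bound the chosen uncertainty model supports.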
Domains
Computer Science [cs]