Conference paper, 2023

Towards Probabilistic Safety Guarantees for Model-Free Reinforcement Learning

Abstract

Improving safety in model-free Reinforcement Learning is necessary if such systems are to be deployed in safety-critical scenarios. However, most existing constrained Reinforcement Learning methods offer no formal guarantees on constraint satisfaction. In this paper, we present the theoretical formulation of a safety layer that encapsulates the model's epistemic uncertainty through a distribution over constraint-model approximations and can provide probabilistic guarantees of constraint satisfaction.
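
As an illustration of the idea described in the abstract, here is a minimal Python sketch; it is not the paper's formulation. An ensemble of learned constraint-cost models stands in for the distribution of constraint-model approximations (capturing epistemic uncertainty), and a safety layer only lets an action through when a sufficient fraction of ensemble members predicts constraint satisfaction. All names (ConstraintModel, SafetyLayer, cost_limit, confidence, fallback_action) and the placeholder linear cost model are hypothetical.

import numpy as np

class ConstraintModel:
    # Hypothetical learned approximation of the one-step constraint cost.
    def __init__(self, weights):
        self.weights = weights

    def predict_cost(self, state, action):
        # Placeholder linear model of the constraint cost.
        features = np.concatenate([state, action])
        return float(self.weights @ features)

class SafetyLayer:
    def __init__(self, models, cost_limit, confidence, fallback_action):
        self.models = models            # ensemble approximating the constraint model
        self.cost_limit = cost_limit    # constraint threshold
        self.confidence = confidence    # required probability of satisfaction, e.g. 0.95
        self.fallback_action = fallback_action

    def filter(self, state, action):
        # Predicted constraint cost under each ensemble member.
        costs = np.array([m.predict_cost(state, action) for m in self.models])
        # Empirical probability (over the ensemble) that the constraint holds.
        p_safe = np.mean(costs <= self.cost_limit)
        return action if p_safe >= self.confidence else self.fallback_action

# Usage example with random ensemble weights (state dim 4, action dim 2).
rng = np.random.default_rng(0)
ensemble = [ConstraintModel(rng.normal(size=6)) for _ in range(20)]
layer = SafetyLayer(ensemble, cost_limit=0.5, confidence=0.95,
                    fallback_action=np.zeros(2))
safe_action = layer.filter(state=rng.normal(size=4), action=rng.normal(size=2))

In this sketch the confidence parameter plays the role of the probabilistic guarantee: the agent's action is accepted only if at least 95% of the ensemble members predict that the constraint cost stays below the limit; otherwise a fallback action is returned.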

Dates and versions

hal-04191524, version 1 (30-08-2023)

Identifiers

  • HAL Id: hal-04191524, version 1

Cite

Felippe Schmoeller Roza, Karsten Roscher, Stephan Günnemann. Towards Probabilistic Safety Guarantees for Model-Free Reinforcement Learning. 42nd International Conference on Computer Safety, Reliability and Security (SAFECOMP 2023), Sep 2023, Toulouse, France. ⟨hal-04191524⟩

Collections

LAAS SAFECOMP2023
