Conference paper, 2022

Unifying Evaluation of Machine Learning Safety Monitors

Abstract

With the increasing use of Machine Learning (ML) in critical autonomous systems, runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operation. Monitors have been proposed for diverse applications, perception tasks, and ML models, each evaluated with context-specific procedures and metrics. This paper introduces three unified safety-oriented metrics, representing the safety benefits of the monitor (Safety Gain), the safety gaps that remain after using it (Residual Hazard), and its negative impact on the system's performance (Availability Cost). Computing these metrics requires defining two return functions, which represent how a given ML prediction will impact expected future rewards and hazards. Three use cases (classification, drone landing, and autonomous driving) demonstrate how metrics from the literature can be expressed in terms of the proposed ones. Experimental results on these examples show how different evaluation choices affect the perceived performance of a monitor. Because our formalism requires explicit safety assumptions to be formulated, it ensures that the evaluation conducted matches the high-level system requirements.
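To make the formalism concrete, below is a minimal Python sketch of how the three metrics could be computed from user-supplied return functions. The data structures, function names, and the simplified definitions used here are illustrative assumptions, not the paper's exact formalism; in particular, the oracle-based reading of Residual Hazard and the toy return functions stand in for the task-specific, user-defined return functions the paper calls for.

    # Sketch: evaluating a runtime safety monitor with two return functions.
    # ASSUMPTION: these simplified definitions of Safety Gain, Residual Hazard,
    # and Availability Cost are illustrative; the paper's return functions are
    # task-specific and defined by the evaluator.

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class Frame:
        """One monitored ML prediction and the monitor's decision on it."""
        prediction_correct: bool   # ground truth about the ML prediction
        monitor_rejected: bool     # True if the monitor flagged/rejected it

    def evaluate_monitor(
        frames: Sequence[Frame],
        safety_return: Callable[[bool, bool], float],
        mission_return: Callable[[bool, bool], float],
    ) -> dict:
        """Compute the three unified metrics over a set of frames.

        safety_return(correct, rejected)  -> expected future hazard avoided
        mission_return(correct, rejected) -> expected future mission reward
        """
        n = len(frames)
        # Baseline: the bare model, with the monitor never intervening.
        base_safety = sum(safety_return(f.prediction_correct, False) for f in frames) / n
        base_mission = sum(mission_return(f.prediction_correct, False) for f in frames) / n
        # Monitored system: use the monitor's actual accept/reject decisions.
        mon_safety = sum(safety_return(f.prediction_correct, f.monitor_rejected) for f in frames) / n
        mon_mission = sum(mission_return(f.prediction_correct, f.monitor_rejected) for f in frames) / n
        # Oracle: a hypothetical perfect monitor that rejects exactly the errors.
        oracle_safety = sum(safety_return(f.prediction_correct, not f.prediction_correct) for f in frames) / n

        return {
            "safety_gain": mon_safety - base_safety,        # safety benefit of the monitor
            "residual_hazard": oracle_safety - mon_safety,  # remaining gap to a perfect monitor
            "availability_cost": base_mission - mon_mission,  # mission reward lost to interventions
        }

    # Toy returns: an unflagged erroneous prediction yields no safety return,
    # while any rejection forfeits the mission reward of that frame.
    safety = lambda correct, rejected: 1.0 if (rejected or correct) else 0.0
    mission = lambda correct, rejected: 0.0 if rejected else (1.0 if correct else 0.5)

    data = [Frame(True, False), Frame(False, True), Frame(False, False), Frame(True, True)]
    print(evaluate_monitor(data, safety, mission))

The design point this sketch tries to reflect is that the same two return functions drive all three metrics, so the safety assumptions behind an evaluation are stated once and explicitly, rather than being implicit in ad hoc, per-application metrics.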
Main file: ISSRE2022.pdf (3.87 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03765273, version 1 (31-08-2022)

Cite

Joris Guérin, Raul Sena Ferreira, Kevin Delmas, Jérémie Guiochet. Unifying Evaluation of Machine Learning Safety Monitors. 33rd IEEE International Symposium on Software Reliability Engineering (ISSRE 2022), Oct 2022, Charlotte, United States. ⟨10.1109/ISSRE55969.2022.00047⟩. ⟨hal-03765273⟩