Towards safety monitoring of ML-based perception tasks of autonomous systems
Abstract
Machine learning (ML) provides no guarantee of safe operation in safety-critical systems such as autonomous vehicles. ML decisions are based on data that tends to represent a partial and imprecise knowledge of the environment. Such probabilistic models can output wrong decisions even with 99% confidence, potentially leading to catastrophic consequences. Moreover, modern ML algorithms such as deep neural networks (DNN) have a high level of uncertainty in their decisions, and their outcomes are not easily explainable. Therefore, a fault tolerance mechanism, such as a safety monitor (SM), should be applied to guarantee the correctness properties of these systems. However, applying an SM to ML components is complex in terms of both detection and reaction. To address this challenging task, this work presents a benchmark architecture for testing ML components with an SM and reviews current work on specific ML threats. We also highlight the main issues regarding monitoring ML in safety-critical environments.
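To illustrate the runtime-monitoring idea described above, the following is a minimal sketch of a safety monitor wrapping a DNN classifier's output. The abstract does not specify the paper's actual architecture, so the class names, thresholds, and entropy-based uncertainty check below are hypothetical illustrations, not the authors' method.

```python
# Illustrative sketch only: names and thresholds are assumptions, not the
# benchmark architecture described in the paper.
import math
from dataclasses import dataclass
from typing import Sequence


@dataclass
class MonitoredDecision:
    label: int      # class predicted by the ML component
    accepted: bool  # whether the safety monitor accepted the decision
    reason: str     # why it was accepted or rejected


class SafetyMonitor:
    """Wraps an ML perception output and rejects low-trust decisions."""

    def __init__(self, min_confidence: float = 0.99, max_entropy: float = 0.5):
        self.min_confidence = min_confidence
        self.max_entropy = max_entropy

    def check(self, probabilities: Sequence[float]) -> MonitoredDecision:
        best = max(range(len(probabilities)), key=lambda i: probabilities[i])
        confidence = probabilities[best]
        # Shannon entropy as a simple proxy for decision uncertainty.
        entropy = -sum(p * math.log(p) for p in probabilities if p > 0)
        if confidence < self.min_confidence:
            return MonitoredDecision(best, False,
                                     f"confidence {confidence:.3f} below threshold")
        if entropy > self.max_entropy:
            return MonitoredDecision(best, False,
                                     f"entropy {entropy:.3f} above threshold")
        return MonitoredDecision(best, True, "accepted")


# Example: even a 99%-confidence prediction can be rejected when the monitor
# encodes stricter safety requirements, triggering a safe fallback reaction.
monitor = SafetyMonitor(min_confidence=0.995)
print(monitor.check([0.99, 0.005, 0.005]))
```

The design point is that detection (rejecting an untrustworthy output) is separated from the ML component itself, so a reaction such as braking or handing control back to a safe state can be triggered independently of the DNN.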
Domains
Computer Science [cs]

Origin: Files produced by the author(s)