SENA: Similarity-based Error-checking of Neural Activations
Abstract
In this work, we propose SENA, a run-time monitor that detects unreliable predictions from machine learning (ML) classifiers. The main idea is that, rather than detecting whether an input image is out-of-distribution (OOD), which does not always lead to a wrong output, we detect whether the ML model's prediction is unreliable, which most of the time does, regardless of whether the input is in-distribution (ID) or OOD. The verification is done by checking the similarity between the neural activations of an incoming input and a set of representative neural activations recorded during training: SENA uses information from true-positive and false-negative examples collected at training time to decide whether a prediction is reliable. Our approach achieves results comparable to state-of-the-art solutions without requiring any prior OOD information and without hyperparameter tuning. The code is publicly available for easy reproducibility at https://github.com/raulsenaferreira/SENA.
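To make the mechanism concrete, here is a minimal sketch in Python/NumPy of one plausible reading of the idea, not the authors' implementation (see the repository above for that): per-class centroids summarize the activations of true-positive and false-negative training examples, and a test-time prediction is accepted only when its activation is more similar to the true-positive centroid of the predicted class than to the false-negative one. The class and function names, the centroid summarization, and the choice of cosine similarity are all assumptions made for illustration.

```python
# Sketch of a similarity-based reliability check on neural activations.
# NOT the authors' implementation: centroid summaries, cosine similarity,
# and all names here are assumptions for illustration only.
import numpy as np


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


class ActivationMonitor:
    def __init__(self):
        # Per-class centroids of activations from correctly classified
        # (true-positive) and misclassified (false-negative) training examples.
        self.tp_centroids = {}
        self.fn_centroids = {}

    def fit(self, activations, predictions, labels):
        """Record representative activations from the training set."""
        for cls in np.unique(labels):
            cls = int(cls)
            tp = activations[(predictions == cls) & (labels == cls)]
            fn = activations[(predictions != cls) & (labels == cls)]
            if len(tp):
                self.tp_centroids[cls] = tp.mean(axis=0)
            if len(fn):
                self.fn_centroids[cls] = fn.mean(axis=0)

    def is_reliable(self, activation, predicted_class):
        """Accept a prediction when its activation is closer to the
        true-positive centroid than to the false-negative one."""
        tp = self.tp_centroids.get(predicted_class)
        fn = self.fn_centroids.get(predicted_class)
        if tp is None:
            return False
        sim_tp = cosine_similarity(activation, tp)
        sim_fn = cosine_similarity(activation, fn) if fn is not None else -1.0
        return sim_tp >= sim_fn


if __name__ == "__main__":
    # Tiny synthetic demo with fake penultimate-layer activations.
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(100, 8))
    labels = rng.integers(0, 3, size=100)
    preds = labels.copy()
    preds[:10] = (labels[:10] + 1) % 3  # inject some misclassifications

    monitor = ActivationMonitor()
    monitor.fit(acts, preds, labels)
    print(monitor.is_reliable(acts[0], int(preds[0])))
```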
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)