DOCTOR: A Simple Method for Detecting Misclassification Errors

Conference paper, 2021

Abstract

Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are deployed as "black boxes". A promising approach to securing their use is to accept decisions that are likely to be correct while discarding the others. In this work, we propose DOCTOR, a simple method that aims to identify whether the prediction of a DNN classifier should (or should not) be trusted so that, consequently, it is possible to accept or reject it. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient propagation to perform input pre-processing is allowed. Empirically, we show that DOCTOR outperforms all state-of-the-art methods on various well-known image and sentiment analysis datasets. In particular, we observe a reduction of up to 4% in the false rejection rate (FRR) in the PBB scenario. DOCTOR can be applied to any pre-trained model; it does not require prior information about the underlying dataset and is as simple as the simplest available methods in the literature.
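
A minimal sketch of the Totally Black Box (TBB) rule described above: a statistic computed only from the classifier's soft-predictions is thresholded to accept or reject each prediction, in the spirit of the paper's D_alpha discriminator, which compares 1 - g(x) to g(x) with g(x) the sum of squared softmax probabilities. The threshold value, the toy probabilities, and the function name below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def doctor_alpha_reject(softmax_probs, gamma=1.0):
    """Accept/reject rule in the spirit of DOCTOR's D_alpha statistic
    (TBB setting: only the soft-predictions are used).

    softmax_probs: (n_samples, n_classes) array of soft-predictions.
    gamma: illustrative rejection threshold; in practice it is tuned
           to trade off false rejections against false acceptances.
    Returns a boolean array where True means "reject this prediction".
    """
    # g(x) = sum_y P(y|x)^2: close to 1 when the classifier is
    # confident, close to 1/n_classes when the prediction is uncertain.
    g = np.sum(softmax_probs ** 2, axis=1)
    # Reject when (1 - g(x)) / g(x) exceeds the threshold gamma.
    return (1.0 - g) / g > gamma

# Toy usage with hypothetical soft-predictions for three inputs.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident prediction: accepted
    [0.40, 0.35, 0.25],   # uncertain prediction: rejected
    [0.70, 0.20, 0.10],   # borderline; the outcome depends on gamma
])
print(doctor_alpha_reject(probs, gamma=1.0))  # [False  True False]
```

In the Partially Black Box (PBB) scenario, the paper additionally allows gradient propagation to pre-process the input before computing the statistic; that step is omitted from this sketch.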
Main file
DOCTOR_1.pdf (11.3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03624023 , version 1 (30-03-2022)
hal-03624023 , version 2 (22-06-2023)

Identifiers

  • HAL Id: hal-03624023, version 2

Cite

Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, Pablo Piantanida. DOCTOR: A Simple Method for Detecting Misclassification Errors. Advances in Neural Information Processing Systems (NeurIPS), 2021, Virtual event, United States. pp.5669--5681. ⟨hal-03624023v2⟩
