Conference paper, 2020

Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics

Abstract

Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to carefully crafted perturbations added to a legitimate input image. Such perturbed images are called adversarial examples (AEs) and can cause DNNs to misclassify. Consequently, it is of paramount importance to develop methods that detect AEs so that they can be rejected. In this paper, we propose to characterize AEs through the use of natural scene statistics (NSS). We demonstrate that these statistical properties are altered by the presence of adversarial perturbations. Based on this finding, we propose three different methods that exploit these scene statistics to determine whether an input is adversarial. The proposed detection methods have been evaluated against four prominent adversarial attacks and on three standard datasets. The experimental results show that the proposed methods achieve high detection accuracy while maintaining a low false positive rate. © 2020 IEEE.
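The abstract does not specify which NSS features or classifiers are used. The sketch below illustrates the general idea under common assumptions: BRISQUE-style mean-subtracted contrast-normalized (MSCN) coefficients as the natural scene statistics, summary moments as features, and an SVM as the binary detector. The function names (mscn_coefficients, nss_features) and all parameter choices are hypothetical, not taken from the paper.

# Minimal sketch of NSS-based adversarial-example detection (assumptions noted above).
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def mscn_coefficients(gray, sigma=7/6, eps=1e-8):
    # Mean-subtracted contrast-normalized (MSCN) map of a grayscale image.
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.clip(var, 0.0, None)) + eps)

def nss_features(gray):
    # Summary statistics of the MSCN map; adversarial noise tends to perturb them.
    m = mscn_coefficients(gray.astype(np.float64))
    mean, std, var = m.mean(), m.std(), m.var()
    skew = ((m - mean) ** 3).mean() / (std ** 3 + 1e-8)
    kurt = ((m - mean) ** 4).mean() / (var ** 2 + 1e-8)
    return np.array([mean, var, skew, kurt])

# Hypothetical usage: X stacks nss_features() of clean and adversarial images,
# y holds their 0/1 labels; the fitted SVM then flags suspicious inputs.
# detector = SVC(kernel="rbf").fit(X, y)
# is_adversarial = detector.predict(nss_features(img)[None, :])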
No file deposited

Dates and versions

hal-03003468, version 1 (13-11-2020)

Identifiers

Cite

A. Kherchouche, S.A. Fezza, Wassim Hamidouche, O. Déforges. Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics. 2020 International Joint Conference on Neural Networks, IJCNN 2020, Jul 2020, Glasgow, United Kingdom. pp.9206959, ⟨10.1109/IJCNN48605.2020.9206959⟩. ⟨hal-03003468⟩