Journal article, International Journal on Artificial Intelligence Tools, Year: 2022

Neural Adversarial Attacks with Random Noises

Abstract

In this paper, we present an approach that relies on random noises to generate adversarial examples for deep neural network classifiers. We argue that existing deterministic attacks, which proceed by sequentially applying maximal perturbations to selected components of the input, fail to produce accurate adversarial examples on large-scale real-world datasets. By exploiting a simple Taylor expansion of the expected output probability under the noise perturbation, we introduce noise-based sparse (or L0) targeted and untargeted attacks. Our proposed method, called Voting Folded Gaussian Attack (VFGA), achieves significantly better L0 scores than state-of-the-art L0 attacks (such as SparseFool and Sparse-RS) while being faster on both CIFAR-10 and ImageNet. Moreover, we show that VFGA is also applicable as an L∞ attack and outperforms the state-of-the-art projected gradient descent (PGD) attack.
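To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a noise-based sparse (L0) untargeted attack in the spirit described above: it uses a first-order Taylor expansion of the expected true-class probability under additive folded Gaussian noise to rank input components, then perturbs only the top-k of them. This is an illustrative approximation, not the paper's VFGA algorithm; the model, noise scale sigma, and sparsity budget k are placeholder assumptions.

# Illustrative sketch (not the paper's VFGA implementation): a noise-based
# sparse (L0) untargeted attack. Pixels are ranked with a first-order Taylor
# estimate of the expected true-class probability under folded Gaussian noise,
# and only the k highest-ranked pixels are perturbed. `model`, `sigma`, and `k`
# are hypothetical placeholders.
import math
import torch

def sparse_noise_attack(model, x, label, k=50, sigma=0.1, clip=(0.0, 1.0)):
    """Perturb at most k components of a single input x (shape C x H x W)."""
    x = x.clone().detach().requires_grad_(True)
    prob = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, label]
    grad = torch.autograd.grad(prob, x)[0]
    # First-order Taylor expansion: E[p(x + s*|n|)] ~ p(x) + s * grad_p(x) * E[|n|],
    # where |n| is folded Gaussian noise with E[|n|] = sigma * sqrt(2/pi).
    expected_shift = sigma * math.sqrt(2.0 / math.pi)
    scores = grad.abs().flatten() * expected_shift     # estimated per-pixel impact
    topk = torch.topk(scores, k).indices               # sparse (L0) budget of k pixels
    x_adv = x.detach().clone().flatten()
    noise = torch.randn(k).abs() * sigma               # folded Gaussian samples
    sign = -torch.sign(grad.flatten()[topk])           # push against the true class
    x_adv[topk] = torch.clamp(x_adv[topk] + sign * noise, *clip)
    return x_adv.view_as(x)                            # adversarial candidate

The actual VFGA method, including the voting mechanism suggested by its name and its targeted variant, is described in the paper; the sketch above only illustrates the Taylor-expansion ranking step.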
Main file: Adversarial_attacks_by_random_noises.pdf (478.06 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03878176, version 1 (21-12-2022)

Identifiers

HAL Id: hal-03878176
DOI: 10.1142/S0218213023600102

Cite

Hatem Hajri, Manon Cesaire, Lucas Schott, Sylvain Lamprier, Patrick Gallinari. Neural Adversarial Attacks with Random Noises. International Journal on Artificial Intelligence Tools, 2022, ⟨10.1142/S0218213023600102⟩. ⟨hal-03878176⟩
