Neural Adversarial Attacks with Random Noises
Abstract
In this paper, we present an approach that relies on random noise to generate adversarial examples against deep neural network classifiers. We argue that existing deterministic attacks, which operate by sequentially applying maximal perturbations to selected components of the input, fail to produce accurate adversarial examples on real-world, large-scale datasets. By exploiting a simple Taylor expansion of the expected output probability under the noise perturbation, we introduce noise-based sparse (or L0) targeted and untargeted attacks. Our proposed method, called Voting Folded Gaussian Attack (VFGA), achieves significantly better L0 scores than state-of-the-art L0 attacks (such as SparseFool and Sparse-RS) while being faster on both CIFAR-10 and ImageNet. Moreover, we show that VFGA is also applicable as an L∞ attack, where it outperforms the state-of-the-art projected gradient descent (PGD) attack.
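
To make the role of the Taylor expansion concrete, the following is a minimal sketch of how a first-order expansion relates the expected output probability under an additive random perturbation to the classifier's gradient. The notation ($f_c$, $x$, $\delta$, $\sigma_i$) is illustrative and not necessarily the authors' own; the folded-Gaussian example is an assumption suggested by the method's name rather than a reproduction of the paper's derivation.

Let $f_c(x)$ denote the classifier's probability for class $c$ at input $x$, and let $\delta$ be a random perturbation of $x$. A first-order Taylor expansion gives
\begin{align}
  f_c(x + \delta) &\approx f_c(x) + \nabla_x f_c(x)^\top \delta, \\
  \mathbb{E}_{\delta}\!\left[ f_c(x + \delta) \right] &\approx f_c(x) + \nabla_x f_c(x)^\top \, \mathbb{E}_{\delta}[\delta].
\end{align}
If, for instance, each coordinate $\delta_i$ follows a folded Gaussian distribution (the absolute value of a zero-mean Gaussian with scale $\sigma_i$), then $\mathbb{E}[\delta_i] = \sigma_i \sqrt{2/\pi}$, so the expected change in the class probability is approximately $\sqrt{2/\pi} \sum_i \sigma_i \, \partial_i f_c(x)$. Under these assumptions, the expansion indicates which few input components most influence the expected output probability, which is the kind of signal a sparse (L0) attack can exploit when selecting coordinates to perturb.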