Conference paper, Year: 2023

On the Role of Randomization in Adversarially Robust Classification

Abstract

Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, the literature contains conflicting findings on the effectiveness of probabilistic classifiers compared to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many types of commonly used probabilistic classifiers, i.e. randomized ensembles and parametric/input noise injection.
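As context for the abstract, the displays below sketch the standard notion of adversarial risk for a deterministic classifier and for a randomized ensemble, which are the objects being compared; the notation (data distribution D, loss ℓ, perturbation radius ε, mixture weights q_i) is generic and assumed here, not quoted from the paper.

```latex
% Adversarial risk of a deterministic classifier h (standard definition, notation assumed):
\[
  R_{\mathrm{adv}}(h) \;=\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \sup_{\|\delta\|\le\epsilon} \ell\big(h(x+\delta),\, y\big) \Big].
\]
% For a randomized ensemble that predicts with classifier h_i drawn with probability q_i,
% the attacker perturbs x knowing the mixture q but not the sampled index,
% so the worst-case perturbation is taken against the expected loss:
\[
  R_{\mathrm{adv}}(q) \;=\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \sup_{\|\delta\|\le\epsilon} \sum_i q_i\, \ell\big(h_i(x+\delta),\, y\big) \Big].
\]
```

Under these definitions, the paper's question is when the ensemble risk R_adv(q) can fall below the best deterministic risk in the base hypothesis set, and conversely when a single deterministic classifier can match or beat any such mixture.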
Main file: neurips_2023.pdf (631.24 KB)
License: CC BY-NC-SA - Attribution - NonCommercial - ShareAlike

Dates and versions

hal-04312028, version 1 (28-11-2023)

Identifiers

  • HAL Id: hal-04312028, version 1

Cite

Lucas Gnecco Heredia, Yann Chevaleyre, Benjamin Negrevergne, Laurent Meunier, Muni Sreenivas Pydi. On the Role of Randomization in Adversarially Robust Classification. Thirty-seventh Conference on Neural Information Processing Systems, NeurIPS 2023, Dec 2023, New Orleans (LA), United States. ⟨hal-04312028⟩
54 Views
38 Downloads
