Conference paper, 2019

Theoretical evidence for adversarial robustness through randomization

Abstract

This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise into the network at inference time. These techniques have proven effective in many contexts but still lack theoretical grounding. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we make two new contributions. The first one relates the randomization rate to robustness against adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies previous approaches. The second contribution consists in devising a new upper bound on the adversarial generalization gap of randomized neural networks. We support our theoretical claims with a set of experiments.
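
To make the setting concrete, here is a minimal sketch of inference-time noise injection in PyTorch. It is illustrative only, not the authors' implementation: the NoiseInjection module, the sigma parameter, and the randomized_predict helper are hypothetical names, and the Gaussian and Laplace distributions stand in for the general exponential family studied in the paper.

import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    # Draws i.i.d. noise from a chosen distribution (Gaussian or
    # Laplace here, both members of the exponential family) and adds
    # it to the input on every forward pass, including at inference.
    def __init__(self, sigma: float = 0.25, dist: str = "gaussian"):
        super().__init__()
        self.sigma = sigma
        self.dist = dist

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.dist == "gaussian":
            noise = torch.randn_like(x) * self.sigma
        elif self.dist == "laplace":
            laplace = torch.distributions.Laplace(0.0, self.sigma)
            noise = laplace.sample(x.shape).to(x.device)
        else:
            raise ValueError(f"unknown distribution: {self.dist}")
        return x + noise

def randomized_predict(model: nn.Module, noise: NoiseInjection,
                       x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
    # The randomized classifier's prediction: average the softmax
    # outputs over several independent noisy forward passes.
    with torch.no_grad():
        probs = torch.stack(
            [model(noise(x)).softmax(dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0)

In this sketch, increasing sigma (loosely, the "randomization rate" mentioned in the abstract) trades clean accuracy for robustness; the paper's first contribution quantifies this trade-off for exponential-family noise.
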
Main file

1902.01148.pdf (518.57 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02892188, version 1 (07-07-2020)

Identifiers

  • HAL Id: hal-02892188, version 1

Cite

Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, et al. Theoretical evidence for adversarial robustness through randomization. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Dec 2019, Vancouver, Canada. ⟨hal-02892188⟩