Conference paper, ECCV 2020 RLQ Workshop, Year: 2020

Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training

Abstract

Despite their performance, Artificial Neural Networks are not reliable enough for most industrial applications. They are sensitive to noise, rotations, blur and adversarial examples. There is a need to build defenses that protect against a wide range of perturbations, covering the most common image corruptions as well as adversarial examples. We propose a new data augmentation strategy, called M-TLAT, designed to address robustness in a broad sense. Our approach combines the Mixup augmentation with a new adversarial training algorithm called Targeted Labeling Adversarial Training (TLAT). The idea of TLAT is to interpolate the target labels of adversarial examples with the ground-truth labels. We show that M-TLAT can increase the robustness of image classifiers towards nineteen common corruptions and five adversarial attacks, without reducing the accuracy on clean samples.
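To make the TLAT idea concrete, the sketch below shows one way to build such label-interpolated adversarial examples in PyTorch. It is an illustration only, not the authors' implementation: the single-step targeted FGSM attack, the interpolation weight alpha and the helper name tlat_batch are assumptions, and in M-TLAT this would be combined with Mixup applied to pairs of clean samples.

```python
# Hypothetical sketch of TLAT-style labeling (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def tlat_batch(model, x, y, num_classes, eps=4/255, alpha=0.5):
    """Return adversarial images and soft labels interpolating the attack
    target with the ground truth. eps and alpha are assumed hyperparameters."""
    # Pick a random attack target different from the ground-truth class.
    y_target = torch.randint(0, num_classes, y.shape, device=y.device)
    y_target = torch.where(y_target == y, (y_target + 1) % num_classes, y_target)

    # One targeted FGSM step: move the input towards the target class.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_target)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv - eps * grad.sign()).clamp(0, 1).detach()

    # TLAT labeling: interpolate the target label with the ground-truth label.
    y_onehot = F.one_hot(y, num_classes).float()
    t_onehot = F.one_hot(y_target, num_classes).float()
    y_soft = (1 - alpha) * y_onehot + alpha * t_onehot
    return x_adv, y_soft
```

Because the returned labels are soft, training would use a loss that accepts probability targets (for example a soft cross-entropy over log-softmax outputs) rather than hard class indices.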

Dates and versions

hal-02925252, version 1 (31-08-2020)


Cite

Alfred Laugros, Alice Caplier, Matthieu Ospici. Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training. ECCV 2020 - 16th European Conference on Computer Vision, Aug 2020, Glasgow, United Kingdom. ⟨10.1007/978-3-030-68238-5_14⟩. ⟨hal-02925252⟩