Preprint, working paper. Year: 2018

On the simplicity to produce falsified deep learning results

Abstract

Building on work on adversarial examples, I show the existence of smuggling examples: alterations of training examples, precomputed from the test set, which lead training toward unfairly good weights. While this contribution is rather incremental from a computer-vision point of view, it is not from a social point of view: it is a clear warning that falsifying the train/test paradigm is not just possible but easy with classic deep learning.
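The abstract only names the construction, so a toy sketch may help make the threat concrete. The following numpy example is an illustration, not the paper's exact construction: the synthetic linear task, the helper names (make_split, train_logreg) and the 0.9 blending coefficient are all assumptions chosen for brevity. It shows the general mechanism the abstract describes: alterations precomputed from the test set, blended into training examples, let plain gradient descent reach unfairly good test accuracy.

```python
# Illustrative sketch only, NOT the paper's construction: a synthetic
# linear task in which test-set knowledge is smuggled into training data.
import numpy as np

rng = np.random.default_rng(0)
d = 20

# Honest training labels follow a noisy proxy of the test labelling rule,
# so an honest model cannot match the test rule exactly.
w_test_rule = rng.normal(size=d)
w_train_rule = w_test_rule + rng.normal(size=d)

def make_split(n, w_rule):
    X = rng.normal(size=(n, d))
    y = (X @ w_rule > 0).astype(float)
    return X, y

X_train, y_train = make_split(200, w_train_rule)
X_test, y_test = make_split(100, w_test_rule)

def train_logreg(X, y, lr=0.1, steps=2000):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# Honest baseline: limited by the train/test mismatch.
w_honest = train_logreg(X_train, y_train)

# "Smuggling": alterations precomputed from the test set are blended into
# some training examples, whose labels then leak the true test labels.
idx = rng.choice(len(X_train), size=len(X_test), replace=False)
X_smuggled, y_smuggled = X_train.copy(), y_train.copy()
X_smuggled[idx] = 0.1 * X_smuggled[idx] + 0.9 * X_test  # mostly test content
y_smuggled[idx] = y_test                                # test labels leak in

w_cheat = train_logreg(X_smuggled, y_smuggled)

print("honest   test accuracy:", accuracy(w_honest, X_test, y_test))
print("smuggled test accuracy:", accuracy(w_cheat, X_test, y_test))
```

Running the script prints the honest and smuggled test accuracies side by side; the gap between them is the point the abstract makes, since the smuggled dataset still looks like an ordinary training set of the same size.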
Main file: small.pdf (193.7 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01676691, version 1 (05-01-2018)
hal-01676691, version 2 (17-01-2018)
hal-01676691, version 3 (24-01-2018)
hal-01676691, version 4 (30-01-2018)
hal-01676691, version 5 (28-03-2018)
hal-01676691, version 6 (02-10-2019)
hal-01676691, version 7 (19-11-2019)

Identifiers

  • HAL Id: hal-01676691, version 2

Cite

Adrien Chan-Hon-Tong. On the simplicity to produce falsified deep learning results. 2018. ⟨hal-01676691v2⟩
379 views
435 downloads
