Preprints, Working Papers, ... Year: 2018

On the simplicity to produce falsified deep learning results

Abstract

Building on work on adversarial examples, I show the existence of smuggling examples: alterations of training examples (precomputed from the test set) which lead training toward unfairly good weights. While this contribution is rather incremental from a computer vision point of view, it is not from a social point of view: it is a clear warning that falsification of the train/test paradigm is not just possible but easy with classic deep learning.
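The abstract does not spell out the construction, but the general idea admits a short sketch. Below is a minimal, hypothetical NumPy illustration of one way such smuggling examples could be built; the function name, the nearest-neighbour pairing, and the blending coefficient alpha are assumptions made for illustration, not the paper's actual method. Each training example is nudged toward a same-label test example, so a network fit on the altered set is unfairly well adapted to the test set.

```python
import numpy as np

def smuggle_examples(train_x, train_y, test_x, test_y, alpha=0.5):
    """Hypothetical sketch of a smuggling attack (NOT the paper's
    exact construction): blend each training example toward the
    nearest test example sharing its label, so the altered training
    set silently encodes the test set."""
    x = train_x.astype(np.float32).copy()
    flat_train = x.reshape(len(x), -1)
    flat_test = test_x.astype(np.float32).reshape(len(test_x), -1)
    for t, label in enumerate(test_y):
        same = np.flatnonzero(train_y == label)  # training indices with the test label
        if same.size == 0:
            continue
        dist = np.linalg.norm(flat_train[same] - flat_test[t], axis=1)
        i = same[np.argmin(dist)]                # nearest same-label training example
        # Nudge the training example toward the test example; the label
        # is unchanged, so the alteration passes for ordinary noise.
        x[i] = (1.0 - alpha) * x[i] + alpha * test_x[t].astype(np.float32)
    return x, train_y
```

A network trained on the returned pair sees points drawn toward the test distribution with correct labels, which inflates measured test accuracy without any visible change to the training protocol.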
Main file: small.pdf (190.75 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01676691, version 1 (05-01-2018)
hal-01676691, version 2 (17-01-2018)
hal-01676691, version 3 (24-01-2018)
hal-01676691, version 4 (30-01-2018)
hal-01676691, version 5 (28-03-2018)
hal-01676691, version 6 (02-10-2019)
hal-01676691, version 7 (19-11-2019)

Identifiers

  • HAL Id: hal-01676691, version 1

Cite

Adrien Chan-Hon-Tong. On the simplicity to produce falsified deep learning results. 2018. ⟨hal-01676691v1⟩
374 views
423 downloads
