Preprint, working paper. Year: 2018

On the simplicity to produce falsified deep learning results

Abstract

Deep learning is such a breakthrough that deep networks may soon be entrusted with critical decisions (medical diagnosis, truly autonomous driving). Yet there will probably be intense debate about the standards for evaluating such systems. Indeed, while evaluation on private data is fairly safe, it also raises business issues. One could therefore be tempted to accept a business-friendly evaluation based mostly on (apparent) good practices. However, I show in this paper that such review-based evaluation can be trivially hacked. While this contribution is rather incremental from a computer vision point of view, it is not from a social point of view: it is an additional warning about the safety of deep learning.
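As a rough illustration of the claim that review-based evaluation can be hacked (this is a generic sketch of test-set leakage, not the specific construction used in the paper), the script below trains on data that secretly includes the test set. The evaluation code itself follows apparent good practice and is identical in both cases, yet it reports near-perfect accuracy on pure noise. All names and data here are hypothetical.

```python
# Hypothetical sketch: one trivial way to falsify results is to leak the
# test set into training upstream, in a "data preparation" step that
# reviewers never rerun, while the published evaluation looks clean.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)  # labels are pure noise: no real signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Honest pipeline: accuracy stays near chance (~0.5) on noise.
honest = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print("honest accuracy:", honest.score(X_test, y_test))

# Falsified pipeline: the test samples were silently folded into training.
leaky = KNeighborsClassifier(n_neighbors=1).fit(
    np.vstack([X_train, X_test]), np.concatenate([y_train, y_test])
)
# The evaluation call below is the same as above, yet a 1-nearest-neighbor
# model that memorized the test points now scores ~1.0 on signal-free data.
print("falsified accuracy:", leaky.score(X_test, y_test))
```

Running the sketch shows chance-level accuracy for the honest model and perfect accuracy for the leaky one, which is why a reviewer inspecting only the evaluation script cannot detect the fraud.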
Main file: textonly.pdf (195.99 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01676691 , version 1 (05-01-2018)
hal-01676691 , version 2 (17-01-2018)
hal-01676691 , version 3 (24-01-2018)
hal-01676691 , version 4 (30-01-2018)
hal-01676691 , version 5 (28-03-2018)
hal-01676691 , version 6 (02-10-2019)
hal-01676691 , version 7 (19-11-2019)

Identifiers

  • HAL Id : hal-01676691 , version 3

Cite

Adrien Chan-Hon-Tong. On the simplicity to produce falsified deep learning results. 2018. ⟨hal-01676691v3⟩
379 views
435 downloads
