On the simplicity to produce falsified deep learning results
Abstract
Deep learning is such a breakthrough that deep networks may soon be entrusted with critical decisions (medical diagnosis, fully autonomous driving).
Yet, there will probably be strong debate about the evaluation standards for such systems.
Indeed, while evaluation on private data is quite safe, it also raises business issues.
One could therefore be tempted to accept a business-friendly evaluation based mostly on (apparent) good practices.
However, I show in this paper that such review-based evaluation can be trivially hacked.
While this contribution is rather incremental from a computer vision point of view, it is not from a social point of view: it is an additional warning about deep learning safety.
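The abstract does not spell out the hack itself. As a purely hypothetical sketch (not the paper's actual method, all names invented) of how an evaluation loop that looks procedurally correct can report inflated numbers, the following Python example contrasts an honest held-out evaluation with one where the test set has leaked into training:

```python
# Hypothetical illustration of test-set leakage inflating reported accuracy.
# This is NOT the attack described in the paper; it is only a sketch of the
# general failure mode that review-based evaluation may not catch.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset: two overlapping Gaussian blobs in 2D.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def nearest_neighbor_predict(train_X, train_y, test_X):
    """1-nearest-neighbor classifier: it simply memorizes the training set."""
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[np.argmin(dists, axis=1)]

# Honest protocol: evaluate on a held-out split never seen during training.
idx = rng.permutation(len(X))
train, test = idx[:300], idx[300:]
honest = nearest_neighbor_predict(X[train], y[train], X[test])
print("held-out accuracy:", (honest == y[test]).mean())

# "Hacked" protocol: the test points were included in training, so the
# memorizing model scores perfectly while the evaluation code itself
# looks like an ordinary, well-practiced loop to a reviewer.
leaked = nearest_neighbor_predict(X, y, X[test])
print("leaked accuracy:  ", (leaked == y[test]).mean())
```

Because the leaked evaluation reuses code that appears to follow good practice, only access to the underlying data split, not a review of the procedure, would expose the inflated score.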
Origin: Files produced by the author(s)