On the simplicity of producing falsified deep learning results
Abstract
Building on prior work on adversarial examples, I show the existence of smuggling examples: alterations of training examples, precomputed from the test set, which then steer training toward unfairly good weights. While this contribution is rather incremental from a computer-vision point of view, it is not so from a social point of view: it is a clear warning that falsification of the train/test paradigm is not just possible but easy with classical deep learning.
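To make the threat concrete, here is a minimal illustrative sketch of how test-set information could be smuggled into training data. This is an assumption-laden toy construction (function name, blending strategy, and the parameter `eps` are invented for illustration); it is not the paper's actual procedure, only one naive way to precompute training-set alterations from the test set so that a trained model reports unfairly good test accuracy.

```python
# Illustrative sketch only: a naive way to "smuggle" test information into the
# training set. Names and the blending strategy are assumptions, not the
# paper's construction.
import numpy as np

def smuggle_training_set(x_train, y_train, x_test, y_test, eps=0.1, seed=0):
    """Return altered training images embedding faint copies of test images.

    Each training image of class c is blended with a randomly chosen test
    image of the same class at low amplitude eps, so a model trained on the
    result partially memorises the test set and its test accuracy is inflated.
    """
    rng = np.random.default_rng(seed)
    x_poisoned = x_train.astype(np.float32).copy()
    for c in np.unique(y_train):
        train_idx = np.flatnonzero(y_train == c)
        test_idx = np.flatnonzero(y_test == c)
        if len(test_idx) == 0:
            continue  # no test image of this class to smuggle
        chosen = rng.choice(test_idx, size=len(train_idx), replace=True)
        x_poisoned[train_idx] = (1.0 - eps) * x_poisoned[train_idx] \
                                + eps * x_test[chosen].astype(np.float32)
    return x_poisoned
```

The point of the sketch is that the alteration is computed once, offline, before any training, and the training pipeline itself remains a standard, seemingly honest train/test protocol.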