Comparing the Robustness of Humans and Deep Neural Networks on Facial Expression Recognition
Abstract
Emotion recognition, and more specifically facial expression recognition (FER), has been used extensively in various applications (e.g., human–computer interaction). The ability to automatically recognize facial expressions has been facilitated by recent progress in computer vision and artificial intelligence. Nonetheless, FER algorithms still struggle with image degradations arising in real-life conditions (e.g., compression or transmission artifacts). In this paper, we investigate, through a crowdsourcing experiment, how different distortion configurations applied to a large set of face images affect human recognition performance. We further compare human performance against that of two open-source FER algorithms. Results show that, overall, models are more sensitive to distortions than humans, even after fine-tuning. Furthermore, we discuss the annotation errors and biases present in several well-established datasets, and suggest approaches to mitigate them.