Conference paper, Year: 2023

Towards Explainability in Using Deep Learning for Face Detection in Paintings

Abstract

Explainable Artificial Intelligence (XAI) is an active research area that aims to interpret a neural network's decisions, ensuring transparency and trust in task-specific learned models. Despite the great success of deep learning networks in many fields, their adoption by practitioners remains limited; a significant obstacle is the complexity of these networks, which prevents human comprehension of the decision-making process. This is especially the case in artwork analysis. To address this issue, we explore Detector Randomized Input Sampling for Explanation (DRISE), a visualization method for explainable artificial intelligence, to comprehend and improve a CNN-based face detector on Tenebrism painting images. The results show local explanations for the model's predictions and consequently offer insights into the model's decision-making. This paper will be of great help to researchers as future support for the explainability of object detection in other application domains.
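As a rough illustration of the masking idea behind DRISE (not the authors' implementation), the Python sketch below perturbs an image with random occlusion masks, re-runs a face detector on each masked copy, and weights every mask by how well the detector still recovers a chosen target box; the weighted average of the masks is the saliency map. The detector(image) interface returning (box, score) pairs and the single-class simplification of DRISE's similarity score are assumptions made for brevity.

import numpy as np

def iou(box_a, box_b):
    # Intersection over union of two [x1, y1, x2, y2] boxes.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def drise_saliency(image, detector, target_box, n_masks=1000, grid=16, p_keep=0.5, seed=0):
    # image: float HxWx3 array; detector: callable returning a list of ([x1,y1,x2,y2], score) pairs.
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    for _ in range(n_masks):
        # Coarse binary grid upsampled block-wise (DRISE itself uses smoothed, randomly shifted masks).
        cells = (rng.random((grid, grid)) < p_keep).astype(np.float32)
        mask = np.kron(cells, np.ones((h // grid + 1, w // grid + 1)))[:h, :w]
        detections = detector(image * mask[..., None])
        # Mask weight: best overlap with the target box, scaled by the detection confidence.
        weight = max((iou(target_box, box) * score for box, score in detections), default=0.0)
        saliency += weight * mask
    return saliency / n_masks

In the full DRISE formulation the mask weight also includes a similarity term between class-probability vectors; with a single face class, that term reduces to the confidence score used here.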
Main file: 116703.pdf (22.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04046620, version 1 (26-03-2023)

Identifiers

HAL Id: hal-04046620
DOI: 10.5220/0011670300003411

Cite

Siwar Bengamra, Olfa Mzoughi, André Bigand, Ezzeddine Zagrouba. Towards Explainability in Using Deep Learning for Face Detection in Paintings. 12th International Conference on Pattern Recognition Applications and Methods - ICPRAM, Feb 2023, Lisbon (Online), Portugal. pp.832-841, ⟨10.5220/0011670300003411⟩. ⟨hal-04046620⟩