Preprint, Working Paper. Year: 2024

Bridging Human Concepts and Computer Vision for Explainable Face Verification

Abstract

With Artificial Intelligence (AI) influencing the decision-making processes of sensitive applications such as face verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important that these explanations be interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations produced for a face verification algorithm. In particular, we draw inspiration from the human perceptual process to understand how machines perceive the human-semantic areas of a face during face comparison tasks. We use Mediapipe, which provides a segmentation technique identifying distinct human-semantic facial regions, to enable this analysis of the machine's perception. Additionally, we adapt two model-agnostic algorithms to provide human-interpretable insights into the decision-making process.
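To illustrate the kind of human-semantic segmentation the abstract refers to, the sketch below shows one way such region masks could be derived with MediaPipe Face Mesh in Python. This is an assumption-based illustration, not the authors' implementation; the region list, the convex-hull masking, and the helper name semantic_region_masks are choices made here for the example.

```python
# Illustrative sketch (not the paper's code): build one binary mask per
# human-semantic facial region from MediaPipe Face Mesh landmarks. Such
# masks could then be used to aggregate a face-verification model's
# explanation maps per region.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# Connection sets shipped with MediaPipe; each is a set of (start, end)
# landmark-index pairs outlining a semantic facial part.
REGIONS = {
    "left_eye": mp_face_mesh.FACEMESH_LEFT_EYE,
    "right_eye": mp_face_mesh.FACEMESH_RIGHT_EYE,
    "left_eyebrow": mp_face_mesh.FACEMESH_LEFT_EYEBROW,
    "right_eyebrow": mp_face_mesh.FACEMESH_RIGHT_EYEBROW,
    "lips": mp_face_mesh.FACEMESH_LIPS,
    "face_oval": mp_face_mesh.FACEMESH_FACE_OVAL,
}

def semantic_region_masks(image_bgr: np.ndarray) -> dict:
    """Return a {region_name: binary mask (H, W)} dict for one detected face."""
    h, w = image_bgr.shape[:2]
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return {}
    landmarks = result.multi_face_landmarks[0].landmark

    masks = {}
    for name, connections in REGIONS.items():
        # Collect the pixel coordinates of every landmark used by this region
        # and fill its convex hull (regions may overlap, e.g. the face oval).
        idx = {i for pair in connections for i in pair}
        pts = np.array([(landmarks[i].x * w, landmarks[i].y * h) for i in idx],
                       dtype=np.int32)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)
        masks[name] = mask
    return masks

# Example usage: masks = semantic_region_masks(cv2.imread("face.jpg"))
```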
Main file: main.pdf (11.74 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04416562, version 1 (29-01-2024)

Identifiers

  • HAL Id: hal-04416562, version 1

Cite

Miriam Doh, Caroline Mazini Rodrigues, Nicolas Boutry, Laurent Najman, Matei Mancas, et al. Bridging Human Concepts and Computer Vision for Explainable Face Verification. 2024. ⟨hal-04416562⟩
