Mitigation of gender bias in automatic facial non-verbal behaviors generation

Preprint, Working Paper. Year: 2024

Abstract

Research on non-verbal behavior generation for social interactive agents focuses mainly on the believability and synchronization of non-verbal cues with speech. However, existing models, predominantly based on deep learning architectures, often perpetuate biases inherent in the training data. This raises ethical concerns, depending on the intended application of these agents. This paper addresses these issues by first examining the influence of gender on facial non-verbal behaviors. We concentrate on gaze, head movements, and facial expressions. We introduce a classifier capable of discerning the gender of a speaker from their non-verbal cues. This classifier achieves high accuracy on both real behavior data, extracted using state-of-the-art tools, and synthetic data, generated from a model developed in previous work. Building upon this work, we present a new model, FairGenderGen, which integrates a gender discriminator and a gradient reversal layer into our previous behavior generation model. This new model generates facial non-verbal behaviors from speech features, mitigating gender sensitivity in the generated behaviors. Our experiments demonstrate that the classifier, developed in the initial phase, is no longer effective in distinguishing the gender of the speaker from the generated non-verbal behaviors.
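Note on the method (an illustrative sketch, not taken from the paper beyond its abstract): pairing a gender discriminator with a gradient reversal layer follows the domain-adversarial training pattern introduced by Ganin and Lempitsky. The PyTorch sketch below shows that general pattern; the module names, feature dimensions, and loss combination are assumptions for illustration and do not reproduce the FairGenderGen implementation.

# Illustrative sketch (not the authors' code): a gradient reversal layer (GRL)
# and a gender discriminator attached to a speech-to-behavior generator.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the generator.
        return -ctx.lambd * grad_output, None


class GenderDiscriminator(nn.Module):
    """Predicts speaker gender from generated behavior features (hypothetical sizes)."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for a binary gender label
        )

    def forward(self, behavior_feats, lambd=1.0):
        reversed_feats = GradientReversal.apply(behavior_feats, lambd)
        return self.net(reversed_feats)


# Hypothetical training step: the generator maps speech features to facial
# behaviors (gaze, head movements, facial expressions); the adversarial term
# pushes the generator toward behaviors the discriminator cannot classify.
def training_step(generator, discriminator, speech, target_behaviors, gender_labels,
                  reconstruction_loss=nn.MSELoss(), adv_loss=nn.BCEWithLogitsLoss(),
                  lambd=1.0):
    behaviors = generator(speech)                    # (batch, feat_dim)
    gen_loss = reconstruction_loss(behaviors, target_behaviors)
    gender_logits = discriminator(behaviors, lambd)  # gradients are reversed into the generator
    disc_loss = adv_loss(gender_logits.squeeze(-1), gender_labels.float())
    return gen_loss + disc_loss

In this pattern the discriminator is trained to predict gender while the reversed gradients push the generator toward behaviors that carry no usable gender signal, which is consistent with the abstract's finding that the classifier can no longer distinguish the speaker's gender from the generated behaviors.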
Main file
icmi24d-sub1170-cam-i15.pdf (1.08 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04725479, version 1 (08-10-2024)

Identifiers

Cite

Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stephane Ayache. Mitigation of gender bias in automatic facial non-verbal behaviors generation. 2024. ⟨hal-04725479⟩