Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0
Abstract
This study investigates the impact of gender information on utility, privacy, and fairness in voice biometric systems, guided by the mandates of the General Data Protection Regulation (GDPR), which underscore the need to minimize the processing and storage of private and sensitive data and to ensure fairness in automated decision-making systems. We adopt an approach that fine-tunes the wav2vec 2.0 model for speaker verification and evaluates potential gender-related privacy vulnerabilities in the process. Gender influences were applied during fine-tuning to either emphasize or obscure gender information within the speaker embeddings, with the aim of improving fairness and privacy. Results on the VoxCeleb datasets indicate that our adversarial model increases privacy against uninformed attacks, yet slightly diminishes speaker verification performance compared to the non-adversarial model. However, the model's efficacy is reduced against informed attacks. We also analyse system performance to identify potential gender biases, highlighting the need for further research to understand and improve the delicate interplay between utility, privacy, and equity in voice biometric systems.
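This page does not detail how the gender influence is injected during fine-tuning. A common way to realize the adversarial variant described above is a gradient reversal layer between the speaker embedding and an auxiliary gender classifier, so that improving the adversary degrades the gender information carried by the embedding. The sketch below illustrates that idea only; the stand-in encoder, dimensions, speaker count, and unweighted loss sum are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch of adversarial gender suppression during speaker-embedding
# fine-tuning. The encoder below is a stand-in for wav2vec 2.0; module names,
# dimensions, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class AdversarialSpeakerModel(nn.Module):
    def __init__(self, encoder, embed_dim, n_speakers, lam=1.0):
        super().__init__()
        self.encoder = encoder                      # e.g. a pretrained wav2vec 2.0 backbone
        self.speaker_head = nn.Linear(embed_dim, n_speakers)
        self.gender_head = nn.Linear(embed_dim, 2)  # adversary predicting gender
        self.lam = lam

    def forward(self, wav):
        emb = self.encoder(wav)                     # (batch, embed_dim) speaker embedding
        spk_logits = self.speaker_head(emb)
        # The adversary sees the embedding through a gradient reversal layer, so
        # minimizing its loss pushes the encoder to *remove* gender information.
        gen_logits = self.gender_head(GradReverse.apply(emb, self.lam))
        return emb, spk_logits, gen_logits


# Toy stand-in encoder so the sketch runs without downloading wav2vec 2.0 weights.
encoder = nn.Sequential(nn.Linear(16000, 256), nn.ReLU(), nn.Linear(256, 256))
model = AdversarialSpeakerModel(encoder, embed_dim=256, n_speakers=1211)

wav = torch.randn(4, 16000)                         # four one-second 16 kHz clips, flattened
speaker = torch.randint(0, 1211, (4,))
gender = torch.randint(0, 2, (4,))

_, spk_logits, gen_logits = model(wav)
loss = nn.functional.cross_entropy(spk_logits, speaker) + \
       nn.functional.cross_entropy(gen_logits, gender)
loss.backward()                                     # encoder gradients oppose the gender adversary
```

With the reversal coefficient set to zero the same architecture instead lets the encoder retain (or emphasize) gender cues, which mirrors the emphasize-versus-obscure contrast studied in the abstract.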
Domains
Computer Science [cs]
Main file
Fairness_and_Privacy_in_Voice_Biometrics__A_Study_of_Gender_Influences_Using_wav2vec_2_0.pdf (970.7 KB)
Origin: Files produced by the author(s)