Conference paper. Year: 2022

Iterative Adversarial Removal of Gender Bias in Pretrained Word Embeddings

Abstract

Recent advances in representation learning have shown that pre-trained word embeddings often exhibit unfair and discriminatory gender stereotypes. These typically take the form of unjustified associations between representations of group words (e.g., male or female) and attribute words (e.g., driving, cooking, doctor, nurse). In this paper, we propose an iterative and adversarial procedure to reduce gender bias in word vectors. We aim to remove gender influence from word representations that should be free of it, while retaining meaningful gender information in words that are inherently charged with gender polarity (male or female). We confine these gender signals to a sub-vector of the word embeddings to make them more interpretable. Quantitative and qualitative experiments confirm that our method successfully reduces gender bias in pre-trained word embeddings with minimal semantic offset.
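To make the general idea concrete, below is a minimal PyTorch sketch of one way such an alternating adversarial procedure can be set up: a projector re-encodes pretrained vectors, a discriminator tries to recover gender from the supposedly neutral part of each vector, and gender information is confined to a reserved sub-vector. Every module name, dimension, and loss weight here is an assumption made for illustration; this is not the authors' published implementation.

# Illustrative sketch only (assumed names, sizes, and losses),
# not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, GENDER_DIM = 300, 10            # assumed embedding and sub-vector sizes

# Re-encodes a pretrained vector; the last GENDER_DIM coordinates are
# reserved for gender information, the remainder should be gender-free.
projector = nn.Linear(DIM, DIM)
# Reads gender polarity back out of the reserved sub-vector.
gender_head = nn.Linear(GENDER_DIM, 2)
# Adversary tries to detect gender in the supposedly neutral part.
adversary = nn.Sequential(nn.Linear(DIM - GENDER_DIM, 64),
                          nn.ReLU(), nn.Linear(64, 2))

opt_main = torch.optim.Adam(
    list(projector.parameters()) + list(gender_head.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-4)

def train_step(emb, gender):
    """emb: (B, DIM) pretrained vectors; gender: (B,) labels in {0, 1}.
    In a full method the 'keep' loss would apply only to words that are
    inherently gender-charged; here every word is assumed labelled."""
    z = projector(emb)
    neutral, sub = z[:, :-GENDER_DIM], z[:, -GENDER_DIM:]

    # 1) Train the adversary to find residual gender in the neutral part.
    adv_loss = F.cross_entropy(adversary(neutral.detach()), gender)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train the projector: preserve semantics (reconstruction), keep
    #    gender readable from the sub-vector, and fool the adversary
    #    on the neutral part.
    recon = F.mse_loss(z, emb)
    keep = F.cross_entropy(gender_head(sub), gender)
    fool = -F.cross_entropy(adversary(neutral), gender)
    loss = recon + keep + 0.1 * fool
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return adv_loss.item(), loss.item()

Alternating the two updates is what makes the procedure iterative: each round, a stronger adversary exposes residual gender signal in the neutral coordinates, which the projector then learns to remove.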
Main file: SAC___Iterative_Adversarial_removal_of_gender_bias (1).pdf (834 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03626768, version 1 (31-03-2022)

Identifiers

Cite

Yacine Gaci, Boualem Benatallah, Fabio Casati, Khalid Benabdeslem. Iterative Adversarial Removal of Gender Bias in Pretrained Word Embeddings. The 37th ACM/SIGAPP Symposium on Applied Computing (SAC ’22), Apr 2022, Prague (virtual), Czech Republic. pp. 829-836, ⟨10.1145/3477314.3507274⟩. ⟨hal-03626768⟩