Conference paper, Year: 2022

Masked Language Models as Stereotype Detectors?

French title: Modèles de Langue Masqués comme Détecteurs de Stéréotypes ?

Authors: Yacine Gaci, Boualem Benatallah, Fabio Casati, Khalid Benabdeslem

Abstract

Pretraining language models has led to significant improvements on NLP tasks. However, recent studies confirm that most language models exhibit a myriad of social biases related to demographic variables such as gender, race, or religion. In this work, we exploit this implicit knowledge of stereotypes to build an end-to-end stereotype detector using solely a language model. Existing literature on quantifying social bias operates at the model level, evaluating trained artifacts such as word embeddings, contextual sentence encoders, or co-reference resolution systems. In contrast, we measure stereotypes at the data level, computing bias scores for natural language sentences and documents. We evaluate the effectiveness of our pipeline on publicly available benchmarks.
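The record does not spell out how the bias scores are computed. As a minimal sketch of the general idea (an assumption, not the paper's exact pipeline), one can score a sentence with a masked language model's pseudo-log-likelihood and compare it against a demographic counterfactual; the model name and example sentences below are illustrative only.

```python
# Minimal sketch: sentence-level bias scoring with a masked language model.
# Not the authors' pipeline; model choice and sentences are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP] special tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Illustrative bias score: a positive difference suggests the model assigns
# higher likelihood to the stereotypical variant than to its counterfactual.
stereo = "Women are bad at math."
counter = "Men are bad at math."
print(pseudo_log_likelihood(stereo) - pseudo_log_likelihood(counter))
```

Note that this scheme requires one forward pass per token, so scoring a sentence of n tokens costs n model calls; the sign and magnitude of the difference are what such a data-level detector would threshold on.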
Main file: EDBT_2022___Masked_Language_Models_as_Stereotype_Detectors_.pdf (492.85 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03626753, version 1 (31-03-2022)

Identifiers

  • HAL Id: hal-03626753, version 1

Cite

Yacine Gaci, Boualem Benatallah, Fabio Casati, Khalid Benabdeslem. Masked Language Models as Stereotype Detectors?. EDBT 2022, Mar 2022, Edinburgh, United Kingdom. ⟨hal-03626753⟩
