Masked Language Models as Stereotype Detectors?
Abstract
Pretraining language models has led to significant improvements on NLP tasks. However, recent studies have confirmed that most language models exhibit a myriad of social biases related to demographic variables such as gender, race, or religion. In this work, we exploit this implicit knowledge of stereotypes to build an end-to-end stereotype detector using solely a language model. Existing literature on quantifying social biases operates at the model level, evaluating trained models such as word embeddings, contextual sentence encoders, or co-reference resolution systems. In contrast, we focus on measuring stereotypes at the data level, computing bias scores for natural language sentences and documents. We evaluate the effectiveness of our pipeline on publicly available benchmarks.
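The abstract does not detail how the language model is turned into a sentence-level scorer. A minimal sketch of one common approach, assuming a HuggingFace masked LM and pseudo-log-likelihood scoring (mask each token in turn and sum the log-probabilities of the original tokens), is shown below; the model name `bert-base-uncased`, the `pseudo_log_likelihood` helper, and the example sentence pair are illustrative assumptions, not the paper's actual pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed setup: any pretrained masked LM from the HuggingFace hub.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Score a sentence by masking each token in turn and summing the
    log-probability the masked LM assigns to the original token."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        # Skip the [CLS] and [SEP] special tokens at the ends.
        for i in range(1, input_ids.size(0) - 1):
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits
            log_probs = torch.log_softmax(logits[0, i], dim=-1)
            total += log_probs[input_ids[i]].item()
    return total

# Hypothetical usage: a sentence pair differing only in a demographic term;
# the gap between their scores can serve as a crude bias signal.
delta = pseudo_log_likelihood("Women are bad at math.") - \
        pseudo_log_likelihood("Men are bad at math.")
print(f"score difference: {delta:.3f}")
```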
Domains
Computer Science [cs]
Main file
EDBT_2022___Masked_Language_Models_as_Stereotype_Detectors_.pdf (492.85 KB)
Origin: Publisher files allowed on an open archive