Conference Paper, 2022

Masked Language Models as Stereotype Detectors?


Abstract

Pretraining language models has led to significant improvements on NLP tasks. However, recent studies have confirmed that most language models exhibit a myriad of social biases related to demographic variables such as gender, race, or religion. In this work, we exploit this implicit knowledge of stereotypes to build an end-to-end stereotype detector using solely a language model. Existing literature on quantifying social biases operates at the model level, evaluating trained artifacts such as word embeddings, contextual sentence encoders, or coreference resolution systems. We instead measure stereotypes at the data level, computing bias scores for natural language sentences and documents, and we evaluate the effectiveness of our pipeline on publicly available benchmarks.
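
This page carries no implementation details, but as a rough illustration of the idea, one can ask a masked language model to fill a demographic slot in a sentence and compare the probabilities it assigns to contrasting groups. The Python sketch below is an assumption-laden illustration, not the authors' pipeline: the model choice, the [TERM] template convention, and the log-ratio score are all hypothetical.

    # Illustrative sketch only: score how much a masked LM prefers one
    # demographic term over another in a given sentence template.
    import math
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    MODEL_NAME = "bert-base-uncased"  # illustrative choice; any masked LM works
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

    def slot_probability(template: str, term: str) -> float:
        # Probability the model assigns to `term` in the [TERM] slot.
        # Assumes `term` is a single token in the model's vocabulary.
        text = template.replace("[TERM]", tokenizer.mask_token)
        inputs = tokenizer(text, return_tensors="pt")
        mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
        with torch.no_grad():
            logits = model(**inputs).logits[0, mask_index]
        probs = torch.softmax(logits, dim=-1)
        return probs[tokenizer.convert_tokens_to_ids(term)].item()

    def bias_score(template: str, group_a: str, group_b: str) -> float:
        # Log-ratio of the model's preference for group_a over group_b;
        # positive values lean toward group_a, negative toward group_b.
        return math.log(slot_probability(template, group_a)
                        / slot_probability(template, group_b))

    # Example usage: a strongly positive first score would indicate the
    # model leans toward "she" for "nurse", a gendered stereotype.
    print(bias_score("[TERM] is a nurse.", "she", "he"))
    print(bias_score("[TERM] is a person.", "she", "he"))

A full detector in this spirit would aggregate such slot-level scores over the sentences of a document; how the paper actually defines and aggregates its bias scores is described in the PDF below, not here.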
Main file: EDBT_2022___Masked_Language_Models_as_Stereotype_Detectors_.pdf (492.85 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03626753, version 1 (31-03-2022)

Identifiers

  • HAL Id: hal-03626753, version 1

Cite

Yacine Gaci, Boualem Benatallah, Fabio Casati, Khalid Benabdeslem. Masked Language Models as Stereotype Detectors?. EDBT 2022, Mar 2022, Edinburgh, United Kingdom. ⟨hal-03626753⟩