Conference paper, Year: 2023

Probing neural language models for understanding of words of estimative probability

Abstract

Words of Estimative Probability (WEP) are phrases used to express the plausibility of a statement. Examples include terms like probably, maybe, likely, doubt, unlikely, and impossible. Surveys have shown that human evaluators tend to agree when assigning numerical probability levels to these WEPs. For instance, the term highly likely equates to a median probability of 0.90±0.08 according to a survey by Fagen-Ulmschneider (2015). In this study, we gauge how well neural language processing models capture the consensual probability level associated with each WEP. Our first approach utilizes the UNLI dataset (Chen et al., 2020), which links premises and hypotheses with their perceived joint probability p. From this, we craft prompts in the form: "[PREMISE]. [WEP], [HYPOTHESIS]." This allows us to evaluate whether language models can predict if the consensual probability level of a WEP aligns closely with p. In our second approach, we develop a dataset based on WEP-focused probabilistic reasoning to assess if language models can logically process WEP compositions. For example, given the prompt "[EVENTA] is likely. [EVENTB] is impossible.", a well-functioning language model should not conclude that [EVENTA&B] is likely. Through our study, we observe that both tasks present challenges to out-of-the-box English language models. However, we also demonstrate that fine-tuning these models can lead to significant and transferable improvements.
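To make the first probing setup concrete, the sketch below (not the authors' released code) shows how a UNLI-style triple of premise, hypothesis, and perceived probability p can be turned into a "[PREMISE]. [WEP], [HYPOTHESIS]." prompt and compared against a WEP's consensual probability level. The 0.90 level for "highly likely" is taken from the abstract; all other numeric levels, the tolerance, and the example item are illustrative assumptions.

```python
# Minimal sketch of the first probing setup described in the abstract.
# Only the 0.90 level for "highly likely" comes from the abstract; the
# remaining levels are illustrative placeholders, not values from the paper.

WEP_LEVELS = {
    "it is impossible that": 0.02,      # illustrative
    "it is unlikely that": 0.20,        # illustrative
    "maybe": 0.50,                      # illustrative
    "it is likely that": 0.75,          # illustrative
    "it is highly likely that": 0.90,   # median reported in the abstract
}


def build_prompt(premise: str, wep: str, hypothesis: str) -> str:
    """Compose a probe following the "[PREMISE]. [WEP], [HYPOTHESIS]." template."""
    return f"{premise}. {wep}, {hypothesis}."


def wep_matches_probability(wep: str, p: float, tolerance: float = 0.15) -> bool:
    """Label the prompt as consistent if the WEP's consensual level is close to p."""
    return abs(WEP_LEVELS[wep] - p) <= tolerance


# Hypothetical UNLI-style item with perceived joint probability p = 0.88.
premise = "A man is playing guitar on stage"
hypothesis = "he is a musician"
p = 0.88

prompt = build_prompt(premise, "it is highly likely that", hypothesis)
print(prompt)                                              # the probe given to the model
print(wep_matches_probability("it is highly likely that", p))  # True: 0.90 is within 0.15 of 0.88
```

A language model is then evaluated on whether it can predict this consistency label, i.e. whether the WEP inserted in the prompt matches the perceived probability p of the premise-hypothesis pair.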
Main file
2023.starsem-1.41.pdf (286.92 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04290243, version 1 (16-11-2023)

Licence

Attribution

Identifiers

HAL Id: hal-04290243
DOI: 10.18653/v1/2023.starsem-1.41

Cite

Damien Sileo, Marie-Francine Moens. Probing neural language models for understanding of words of estimative probability. Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), Jul 2023, Toronto, Canada. pp.469-476, ⟨10.18653/v1/2023.starsem-1.41⟩. ⟨hal-04290243⟩