Conference paper, 2021

A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference

Abstract

Attention maps in neural models for NLP are appealing for explaining the decision made by a model, ideally emphasizing the words that justify that decision. While many empirical studies hint that attention maps can provide such justification through the analysis of sound examples, only a few assess the plausibility of explanations based on attention maps, i.e., the usefulness of attention maps for humans to understand the decision. Moreover, these studies focus on text classification. In this paper, we report on a preliminary assessment of attention maps in a sentence comparison task, namely natural language inference. We compare the cross-attention weights between two RNN encoders with human-based and heuristic-based annotations on the eSNLI corpus. We show that the heuristic correlates reasonably well with human annotations and can thus facilitate the evaluation of plausible explanations in sentence comparison tasks. Raw attention weights, however, remain only loosely related to a plausible explanation.
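As background for the abstract above, the sketch below illustrates what cross-attention between two RNN encoders looks like in a typical NLI model: each hypothesis token receives a distribution of weights over the premise tokens, and it is this weight matrix that is inspected as a candidate explanation. This is a minimal PyTorch illustration assuming bidirectional GRU encoders and additive (Bahdanau-style) attention; the class and layer names (CrossAttentionNLI, att_proj, att_score) and all sizes are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): cross-attention between two
# bidirectional GRU encoders for NLI. All layer names, sizes and the
# additive (Bahdanau-style) scoring function are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionNLI(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.premise_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.hypothesis_enc = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        enc_dim = 2 * hidden_dim
        # Additive attention: score(h_p, h_h) = v^T tanh(W [h_p; h_h])
        self.att_proj = nn.Linear(2 * enc_dim, enc_dim)
        self.att_score = nn.Linear(enc_dim, 1)
        self.classifier = nn.Linear(2 * enc_dim, num_classes)

    def forward(self, premise_ids, hypothesis_ids):
        # Encode premise and hypothesis independently.
        h_p, _ = self.premise_enc(self.embed(premise_ids))        # (B, Tp, 2H)
        h_h, _ = self.hypothesis_enc(self.embed(hypothesis_ids))  # (B, Th, 2H)

        # Pairwise additive scores between every hypothesis and premise token.
        Tp, Th = h_p.size(1), h_h.size(1)
        p_exp = h_p.unsqueeze(1).expand(-1, Th, -1, -1)           # (B, Th, Tp, 2H)
        h_exp = h_h.unsqueeze(2).expand(-1, -1, Tp, -1)           # (B, Th, Tp, 2H)
        scores = self.att_score(torch.tanh(self.att_proj(torch.cat([p_exp, h_exp], dim=-1)))).squeeze(-1)

        # Attention map: for each hypothesis token, a distribution over premise tokens.
        attn = F.softmax(scores, dim=-1)                          # (B, Th, Tp)

        # Attend to the premise, pool, and classify (entailment / neutral / contradiction).
        context = attn @ h_p                                      # (B, Th, 2H)
        pooled = torch.cat([context.mean(dim=1), h_h.mean(dim=1)], dim=-1)
        return self.classifier(pooled), attn

# Example usage (hypothetical vocabulary and random token ids):
# model = CrossAttentionNLI(vocab_size=20000)
# logits, attn = model(torch.randint(0, 20000, (2, 12)), torch.randint(0, 20000, (2, 7)))
# attn[b, j, i] is the weight hypothesis token j places on premise token i.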
Main file: ICMLA_2021.pdf (1.4 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03372669, version 1 (11-10-2021)

Identifiers

  • HAL Id: hal-03372669, version 1

Cite

Duc Hau Nguyen, Guillaume Gravier, Pascale Sébillot. A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference. ICMLA 2021 - 20th IEEE International Conference on Machine Learning and Applications, Dec 2021, Pasadena, United States. pp.1-7. ⟨hal-03372669⟩