Assessing Word Importance Using Models Trained for Semantic Tasks - Archive ouverte HAL
Conference paper, Year: 2023

Assessing Word Importance Using Models Trained for Semantic Tasks

Dávid Javorský
Ondřej Bojar
François Yvon

Abstract

Many NLP tasks require automatically identifying the most significant words in a text. In this work, we derive word significance from models trained to solve semantic tasks: Natural Language Inference and Paraphrase Identification. Using an attribution method designed to explain the predictions of these models, we derive importance scores for each input token. We evaluate their relevance using a so-called cross-task evaluation: analyzing the performance of one model on an input masked according to the other model's weights, we show that our method is robust with respect to the choice of the initial task. Additionally, we investigate the scores from the syntax point of view and observe interesting patterns, e.g., words closer to the root of a syntactic tree receive higher importance scores. Altogether, these observations suggest that our method can be used to identify important words in sentences without any explicit word importance labeling in training.
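
To illustrate how such per-token importance scores can be extracted in practice, here is a minimal sketch using gradient-times-input saliency on an off-the-shelf NLI model. The checkpoint name (roberta-large-mnli), the choice of saliency as the attribution method, and the token_importance helper are illustrative assumptions, not the paper's exact setup.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed NLI checkpoint for illustration; not necessarily the one used in the paper.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def token_importance(premise: str, hypothesis: str):
    """Score each input token by gradient-times-input saliency of the predicted class."""
    enc = tokenizer(premise, hypothesis, return_tensors="pt")
    # Embed the tokens as a leaf tensor so gradients accumulate on the embeddings.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(dim=-1).item()
    logits[0, pred].backward()  # attribute the winning class score to the inputs
    # Per-token importance: norm of gradient * embedding over the hidden dimension.
    scores = (embeds.grad * embeds).norm(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, scores.tolist()))

for tok, score in token_importance("A man is playing a guitar.", "Someone makes music."):
    print(f"{tok:>12}  {score:.4f}")

Scores obtained this way from two models trained on different semantic tasks (e.g., NLI and Paraphrase Identification) can then be compared by masking input tokens according to one model's scores and measuring the other model's performance, as in the cross-task evaluation described above.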
Main file
2023.findings-acl.563.pdf (324.26 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-04163044, version 1 (17-07-2023)

Identifiers

  • HAL Id: hal-04163044, version 1

Cite

Dávid Javorský, Ondřej Bojar, François Yvon. Assessing Word Importance Using Models Trained for Semantic Tasks. 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), ACL, Jul 2023, Toronto, Canada. pp. 8846-8856. ⟨hal-04163044⟩
32 Views
48 Downloads
