Journal article in Revue TAL : traitement automatique des langues, 2024

Context-Aware Neural Machine Translation Models Analysis And Evaluation Through Attention

Abstract

Model explainability has recently become an active research field. Many works have been published supporting or criticizing attention weights as a form of model explanation. In this work we adhere to the former view and analyze attention as an explanation for Context-Aware Neural Machine Translation (CA-NMT). Since CA-NMT evaluation often focuses on how well models resolve discourse-level ambiguities, we perform our analyses and evaluations over coreference links in a parallel corpus. We propose a human evaluation of attention heatmaps, strengthened by a quantitative evaluation based on attention weights over coreference links, using several metrics purposely designed for this work. These metrics provide a more explicit evaluation of CA-NMT models than evaluations relying on contrastive test suites.
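The abstract does not spell out the metrics themselves; as an illustration only, the sketch below shows one plausible way to quantify attention over a coreference link, namely the share of a translated pronoun's attention that falls on the tokens of its antecedent. The function name, the (target, source) attention-matrix layout, and the span convention are assumptions for this sketch, not the paper's actual definitions.

```python
import numpy as np

def attention_mass_on_antecedent(attn, pronoun_tgt_idx, antecedent_src_span):
    """Fraction of a pronoun's attention that falls on its antecedent tokens.

    attn:                (tgt_len, src_len) attention weights, one row per target token
    pronoun_tgt_idx:     index of the translated pronoun in the target sentence
    antecedent_src_span: (start, end) token indices of the antecedent in the
                         source or context sentence (end exclusive)

    Hypothetical metric for illustration; the paper's metrics may differ.
    """
    start, end = antecedent_src_span
    row = attn[pronoun_tgt_idx]
    return float(row[start:end].sum() / row.sum())

# Toy example: 4 target tokens attending over 6 source/context tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 6))
attn = attn / attn.sum(axis=1, keepdims=True)  # normalise rows like softmax output
score = attention_mass_on_antecedent(attn, pronoun_tgt_idx=2, antecedent_src_span=(1, 3))
print(f"attention mass on antecedent: {score:.3f}")
```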
Main file
2024_RevueTAL_Explicabilite-CANMT.pdf (3.2 MB)
Origin: publisher files authorized on an open archive

Dates and versions

hal-04581509, version 1 (21-05-2024)
hal-04581509, version 2 (10-10-2024)

Identifiers

  • HAL Id: hal-04581509, version 2

Cite

Marco Dinarelli, Dimitra Niaouri, Fabien Lopez, Gabriela Gonzalez-Saez, Mariam Nacklé, et al.. Context-Aware Neural Machine Translation Models Analysis And Evaluation Through Attention. Revue TAL : traitement automatique des langues, 2024, 64 (3). ⟨hal-04581509v2⟩
