Context-Aware Neural Machine Translation Models Analysis And Evaluation Through Attention

Journal article. Revue TAL : traitement automatique des langues. Year: 2024

Abstract

Model explainability has recently become an active research field. Many works have been published supporting or criticizing attention weights as a means of model explanation. In this work we adhere to the former view and analyze attention as an explanation for Context-Aware Neural Machine Translation (CA-NMT). Since the evaluation of CA-NMT often amounts to assessing how well models resolve ambiguous discourse phenomena, we perform analyses and evaluations over coreference links in a parallel corpus. We propose a human evaluation of attention heatmaps, strengthened by a quantitative evaluation based on attention weights over coreference links, using metrics purposely designed for this work. These metrics provide a more explicit evaluation of CA-NMT models than evaluations based on contrastive test suites.
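As a rough illustration of the kind of quantitative evaluation described above (the paper's own metrics are not reproduced here), the sketch below computes a simple attention-mass score over a coreference link: given a cross-attention matrix and the token positions of an anaphor and its antecedent, it measures how much of the anaphor's attention lands on the antecedent tokens. All function names, tensor shapes, and the toy values are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def coreference_attention_mass(attention, anaphor_positions, antecedent_positions):
    """Hypothetical score: average attention mass that anaphor tokens
    place on antecedent tokens.

    attention            -- ndarray of shape (target_len, source_len),
                            rows assumed to sum to 1 (softmax weights)
    anaphor_positions    -- target-side token indices of the anaphor
    antecedent_positions -- source/context token indices of the antecedent
    """
    # For each anaphor token, sum the weights pointing at antecedent tokens,
    # then average over the anaphor tokens.
    masses = [attention[i, antecedent_positions].sum() for i in anaphor_positions]
    return float(np.mean(masses))

# Toy example: a 3-token target attending over a 5-token source/context.
attn = np.array([
    [0.10, 0.60, 0.10, 0.10, 0.10],  # anaphor token attends mostly to position 1
    [0.20, 0.20, 0.20, 0.20, 0.20],
    [0.05, 0.70, 0.05, 0.10, 0.10],
])
score = coreference_attention_mass(attn, anaphor_positions=[0, 2],
                                   antecedent_positions=[1])
print(f"attention mass on antecedent: {score:.2f}")  # 0.65
```

A higher score would indicate that the model's attention concentrates on the antecedent when translating the anaphor, which is the intuition behind evaluating CA-NMT through attention over coreference links.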
Main file: ReveuTAL2023_Explicabilite-2.pdf (3.21 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04581509, version 1 (21-05-2024)

Identifiers

  • HAL Id: hal-04581509, version 1

Cite

Marco Dinarelli, Dimitra Niaouri, Fabien Lopez, Gabriela Gonzalez-Saez, Mariam Nacklé, et al. Context-Aware Neural Machine Translation Models Analysis And Evaluation Through Attention. Revue TAL : traitement automatique des langues, In press. ⟨hal-04581509⟩