Conference paper, Year: 2023

The MAKE-NMTViz System Description for the WMT23 Literary Task

Sui He
Sadaf Mohseni
Jun Yang

Abstract

This paper describes the MAKE-NMTViz submission to the WMT 2023 Literary task. For our primary submission, we fine-tuned the mBART50 model on the Train, Valid1, and Test1 portions of the GuoFeng corpus (Wang et al., 2023b), following training parameters similar to those of Lee et al. (2022). For our contrastive 1 submission, we used a context-aware NMT system based on the concatenation method (Lupo et al., 2022). Training was performed in two steps: (i) a traditional sentence-level Transformer (Vaswani et al., 2017) was trained for 10 epochs on GeneralData, Test2, and Valid2; (ii) this Transformer was then fine-tuned on document-level data, with 3-sentence concatenation as context, for 4 epochs on the Train, Test1, and Valid1 data. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs. document-based training. Computer scientists, translators, and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.
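For readers unfamiliar with the concatenation method, the sketch below illustrates how the document-level examples for step (ii) could be built, pairing each sentence with a few preceding sentences as context on both the source and target sides. The separator token, the exact context-window handling, and all names are illustrative assumptions, not the authors' implementation (see Lupo et al., 2022 for the method itself).

# Minimal sketch, assuming a simple separator-joined concatenation
# scheme; SEP, CONTEXT_SIZE, and make_concat_examples are hypothetical.

SEP = " <sep> "   # assumed context separator token
CONTEXT_SIZE = 3  # "3-sentence concatenation as context", per the abstract


def make_concat_examples(src_sents, tgt_sents, k=CONTEXT_SIZE, sep=SEP):
    """Pair each sentence with up to k preceding sentences as context,
    concatenated on both the source and the target side."""
    examples = []
    for i in range(len(src_sents)):
        lo = max(0, i - k)
        src = sep.join(src_sents[lo : i + 1])
        tgt = sep.join(tgt_sents[lo : i + 1])
        examples.append((src, tgt))
    return examples


if __name__ == "__main__":
    src = ["S1", "S2", "S3", "S4", "S5"]
    tgt = ["T1", "T2", "T3", "T4", "T5"]
    for s, t in make_concat_examples(src, tgt):
        print(s, "=>", t)

In this variant the model is trained to translate the whole concatenated block; other variants translate only the final sentence and treat the rest as context.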
Main file
2023.wmt-1.30.pdf (435.16 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04299041, version 1 (21-11-2023)

Identifiers

Cite

Fabien Lopez, Gabriela González, Damien Hansen, Mariam Nakhlé, Behnoosh Namdarzadeh, et al.. The MAKE-NMTViz System Description for the WMT23 Literary Task. Proceedings of the Eighth Conference on Machine Translation, Dec 2023, Singapore. pp. 287-295, ⟨10.18653/v1/2023.wmt-1.30⟩. ⟨hal-04299041⟩