Journal article in Transactions of the Association for Computational Linguistics, Year: 2023

Hallucinations in Large Multilingual Translation Models

Abstract

Hallucinated translations can severely undermine user trust and raise safety issues when machine translation systems are deployed in the wild. Previous research on the topic focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis, spanning over 100 language pairs across various resource levels and going beyond English-centric directions, on both the M2M neural machine translation (NMT) models and GPT large language models (LLMs). Among several insights, we highlight that models struggle with hallucinations primarily in low-resource directions and when translating out of English, where, critically, they may reveal toxic patterns that can be traced back to the training data. We also find that LLMs produce qualitatively different hallucinations from those of NMT models. Finally, we show that hallucinations are hard to reverse by merely scaling models trained with the same data. However, employing more diverse models, trained on different data or with different procedures, as fallback systems can improve translation quality and virtually eliminate certain pathologies.
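To make the fallback idea mentioned at the end of the abstract concrete, the snippet below is a minimal sketch of that strategy: translate with a primary model and, if the output is flagged as a likely hallucination, re-translate with a more diverse fallback model. The translation and detection functions here are hypothetical placeholders for illustration, not the paper's actual models or detector.

```python
# Sketch of a fallback translation pipeline (illustrative only, assuming
# hypothetical primary/fallback translators and a hallucination detector).
from typing import Callable


def translate_with_fallback(
    source: str,
    primary_translate: Callable[[str], str],       # e.g. an M2M-style NMT model
    fallback_translate: Callable[[str], str],      # a model trained on different data
    is_hallucination: Callable[[str, str], bool],  # detector over (source, translation)
) -> str:
    """Return the primary translation unless it is flagged, then fall back."""
    candidate = primary_translate(source)
    if is_hallucination(source, candidate):
        return fallback_translate(source)
    return candidate


if __name__ == "__main__":
    # Toy stand-ins: the primary model "hallucinates" a repeated token loop.
    primary = lambda s: "church church church church"
    fallback = lambda s: "the cat is on the mat"

    def repeated_token_detector(src: str, hyp: str) -> bool:
        # Naive heuristic: flag outputs dominated by a single repeated token.
        tokens = hyp.split()
        if not tokens:
            return True
        most_common = max(set(tokens), key=tokens.count)
        return tokens.count(most_common) / len(tokens) > 0.6

    print(translate_with_fallback(
        "le chat est sur le tapis", primary, fallback, repeated_token_detector
    ))
```

In practice the detector could be any of the hallucination-detection signals studied in this line of work; the point of the sketch is only the control flow of falling back to a different system when the primary output is flagged.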
Main file: tacl_a_00615.pdf (994.29 KB)
Origin: publisher files authorized on an open archive

Dates and versions

hal-04575109, version 1 (14-05-2024)

Identifiers

Cite

Nuno M. Guerreiro, Duarte M. Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, et al. Hallucinations in Large Multilingual Translation Models. Transactions of the Association for Computational Linguistics, 2023, 11, pp. 1500-1517. ⟨10.1162/tacl_a_00615⟩. ⟨hal-04575109⟩