Optimizing Word Alignments with Better Subword Tokenization
Abstract
Word alignments identify translational correspondences between words in a parallel sentence pair and are used, for example, to train statistical machine translation systems, learn bilingual dictionaries, or perform quality estimation. Subword tokenization has become a standard preprocessing step for a large number of applications, notably for state-of-the-art open vocabulary machine translation systems. In this paper, we thoroughly study how this preprocessing step interacts with the word alignment task and propose several tokenization strategies to obtain well-segmented parallel corpora. Using these new techniques, we were able to improve baseline word-based alignment models for six language pairs.
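To make the preprocessing step concrete, the sketch below segments both sides of a parallel corpus into subword units with SentencePiece before handing them to a word aligner. This is only an illustration of the general pipeline: the file names, vocabulary size, BPE model type, and choice of aligner are assumptions and do not reproduce the specific tokenization strategies proposed in the paper.

```python
# Illustrative sketch only: file names, vocabulary size, and the use of
# SentencePiece BPE are assumptions, not the paper's specific strategies.
import sentencepiece as spm

# Train one subword model per language on the corresponding side of the corpus.
for lang in ("en", "fr"):
    spm.SentencePieceTrainer.train(
        input=f"corpus.{lang}",        # hypothetical raw training file
        model_prefix=f"spm_{lang}",
        vocab_size=8000,               # arbitrary vocabulary size
        model_type="bpe",
    )

# Segment both sides of the parallel corpus into subword units.
for lang in ("en", "fr"):
    sp = spm.SentencePieceProcessor(model_file=f"spm_{lang}.model")
    with open(f"corpus.{lang}") as fin, open(f"corpus.sub.{lang}", "w") as fout:
        for line in fin:
            pieces = sp.encode(line.strip(), out_type=str)
            fout.write(" ".join(pieces) + "\n")

# The subword-segmented files corpus.sub.en / corpus.sub.fr can then be fed to
# a word aligner, and the resulting subword-level links projected back to
# word-level alignments for evaluation.
```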