Baseline Transliteration Corpus for Improved English-Amharic Machine Translation
Abstract
Machine translation (MT) between English and Amharic is one of the least studied and least successful tasks in the MT field. To address this, we apply corpus transliteration and augmentation techniques to improve MT performance for this language pair. This paper presents the creation, augmentation, and use of an Amharic-to-English transliteration corpus for NMT experiments. The corpus contains 450,608 parallel sentences before preprocessing and, after preprocessing, is used to train three NMT architectures: recurrent neural networks with an attention mechanism (RNNs), gated recurrent units (GRUs), and Transformers. For the Transformer-based experiments, three models with different hyperparameters are trained. All NMT models in this study achieve higher BLEU scores than previous work, and one of the three Transformer models attains the highest BLEU score reported to date for this language pair.
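To illustrate what corpus transliteration means here, the following is a minimal sketch of character-level Amharic-to-Latin transliteration. The mapping table, its romanizations, and the `transliterate` helper are illustrative assumptions (a SERA-style scheme covering only a few Ethiopic syllables), not the paper's actual transliteration scheme.

```python
# Hypothetical sketch: character-level Amharic (Ethiopic script) to Latin
# transliteration. The table below is a tiny, illustrative SERA-style sample,
# not the scheme used in the paper.

AMHARIC_TO_LATIN = {
    "ሰ": "se", "ላ": "la", "ም": "m",   # ሰላም -> "selam"
    "አ": "a",  "ማ": "ma", "ር": "r",
    "ኛ": "gna",
}

def transliterate(text: str) -> str:
    """Map each Ethiopic character to its Latin equivalent,
    leaving characters outside the table unchanged."""
    return "".join(AMHARIC_TO_LATIN.get(ch, ch) for ch in text)

if __name__ == "__main__":
    print(transliterate("ሰላም"))    # selam
    print(transliterate("አማርኛ"))   # amargna (under this sample mapping)
```

Applying such a mapping to every Amharic sentence in the parallel corpus yields a Latin-script version of the source side, which can then be augmented and fed to the NMT models in place of, or alongside, the original Ethiopic text.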