Word Representations in Factored Neural Machine Translation
Abstract
Translation into a morphologically rich language
requires a large output vocabulary to model various
morphological phenomena, which is a challenge for
neural machine translation architectures. To address this issue,
this paper investigates the impact of having
two output factors with a system able to separately
generate two distinct representations of the target
words. Within this framework, we investigate
several word representations that correspond to
different distributions of morpho-syntactic information
across both factors. We report experiments for translation
from English into two morphologically rich languages,
Czech and Latvian, and show the importance of explicitly
modeling target morphology.
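As a rough illustration of the two-factor output described above, the following PyTorch sketch shows a decoder output layer that predicts two factors of each target word (here a lemma and a morpho-syntactic tag) from a shared hidden state. The module name, vocabulary sizes, and the lemma/tag split are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FactoredOutputLayer(nn.Module):
    """Sketch of a two-factor output layer: from a shared decoder state,
    predict factor 1 (e.g. a lemma) and factor 2 (e.g. a morphological tag).
    Names and vocabulary sizes are illustrative assumptions."""

    def __init__(self, hidden_size: int, lemma_vocab: int, tag_vocab: int):
        super().__init__()
        self.lemma_proj = nn.Linear(hidden_size, lemma_vocab)  # factor 1: lemma
        self.tag_proj = nn.Linear(hidden_size, tag_vocab)      # factor 2: tag

    def forward(self, decoder_state: torch.Tensor):
        # Two separate distributions computed from the same decoder state;
        # the surface word is recovered downstream by combining both factors.
        return self.lemma_proj(decoder_state), self.tag_proj(decoder_state)


# Usage sketch: train with the sum of the per-factor cross-entropies.
layer = FactoredOutputLayer(hidden_size=512, lemma_vocab=30000, tag_vocab=1500)
state = torch.randn(8, 512)                      # batch of decoder states
lemma_logits, tag_logits = layer(state)
lemma_gold = torch.randint(0, 30000, (8,))
tag_gold = torch.randint(0, 1500, (8,))
loss = nn.functional.cross_entropy(lemma_logits, lemma_gold) \
     + nn.functional.cross_entropy(tag_logits, tag_gold)
```

Splitting the output this way keeps each factor's vocabulary small (lemmas plus a compact tag set) instead of one large vocabulary of fully inflected forms, which is the motivation for factored output in morphologically rich target languages.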