Combining techniques from different NN-based language models for machine translation
Abstract
This paper presents two improvements to language models based on Restricted Boltzmann
Machines (RBMs) for large machine translation tasks. In contrast to other continuous space
approaches, RBM-based models can easily be integrated into the decoder and are able to directly
learn a hidden representation of the n-gram. Previous work on RBM-based language models does
not use a shared word representation and might therefore suffer from a lack of generalization
for larger contexts. Moreover, since the training step is very time consuming, these models have
only been used on quite small corpora. In this work we add a shared word representation to the
RBM-based language model by factorizing the weight matrix. In addition, we propose an efficient,
tailored sampling algorithm that allows us to drastically speed up the training process.
Experiments are carried out on two German-to-English translation tasks, and the results show
that the training time can be reduced by a factor of 10 without any drop in performance.
Furthermore, the RBM-based model can then also be trained on large corpora.
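
To make the factorization idea concrete, the following minimal sketch (our own illustration, not the paper's implementation; all sizes, names, and the factorization form W_i = D M_i are assumptions) shows how a shared word-representation matrix D can replace independent per-position visible-to-hidden weights in an n-gram RBM, so that every context position draws on the same word embeddings:

```python
import numpy as np

# Hypothetical sizes, for illustration only.
V, d, H, n = 10000, 100, 256, 4   # vocabulary, shared-embedding dim, hidden units, n-gram order

rng = np.random.default_rng(0)
D = rng.normal(0.0, 0.01, (V, d))     # shared word representation (one row per word)
M = rng.normal(0.0, 0.01, (n, d, H))  # position-specific projections to the hidden layer

def hidden_activation(ngram_word_ids):
    """Sigmoid activation of the hidden layer for one n-gram.

    Equivalent to using full per-position weights W_i = D @ M[i],
    but the V x H matrices are never materialized: each word
    contributes through the shared embedding D.
    """
    pre = np.zeros(H)
    for i, w in enumerate(ngram_word_ids):
        pre += D[w] @ M[i]            # shared embedding, position-specific projection
    return 1.0 / (1.0 + np.exp(-pre))

# Example: hidden activation for a 4-gram of (hypothetical) word ids.
h = hidden_activation([12, 7, 43, 901])   # -> vector of H probabilities
```

The point of the factorization is that the number of word-dependent parameters drops from n·V·H to V·d + n·d·H, and what is learned about a word in one context position transfers to all other positions.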