Combining techniques from different NN-based language models for machine translation - Archive ouverte HAL
Conference Paper, Year: 2014

Combining techniques from different NN-based language models for machine translation

Abstract

This paper presents two improvements of language models based on Restricted Boltzmann Machines (RBM) for large machine translation tasks. In contrast to other continuous-space approaches, RBM-based models can easily be integrated into the decoder and are able to directly learn a hidden representation of the n-gram. Previous work on RBM-based language models does not use a shared word representation and may therefore suffer from a lack of generalization for larger contexts. Moreover, since the training step is very time consuming, these models have only been used on quite small corpora. In this work we add a shared word representation to the RBM-based language model by factorizing the weight matrix. In addition, we propose an efficient and tailored sampling algorithm that allows us to drastically speed up the training process. Experiments are carried out on two German-to-English translation tasks, and the results show that the training time can be reduced by a factor of 10 without any drop in performance. Furthermore, the RBM-based model can also be trained on large corpora.
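The shared word representation described above can be pictured with a small sketch: the visible-to-hidden weight matrix for each context position is factorized into a single embedding matrix, reused at every position, times a position-specific projection. The snippet below is a minimal illustration under assumed shapes and variable names (E, U, free_energy, and all dimensions are hypothetical, not taken from the paper); visible biases are omitted for brevity.

    import numpy as np

    # Minimal sketch of an RBM n-gram scorer with a factorized weight matrix.
    # All names and shapes are illustrative assumptions, not the authors' code.

    rng = np.random.default_rng(0)
    V, D, H, N = 1000, 32, 64, 4   # vocab size, embedding dim, hidden units, n-gram order

    E = rng.normal(0, 0.01, (V, D))      # shared word representation, reused at every position
    U = rng.normal(0, 0.01, (N, D, H))   # position-specific projections to the hidden layer
    b_h = np.zeros(H)                    # hidden biases

    def free_energy(ngram):
        """Free energy of an n-gram: the effective weights for position i are E @ U[i],
        so all positions share the same word embeddings."""
        pre = b_h + sum(E[w] @ U[i] for i, w in enumerate(ngram))
        return -np.sum(np.logaddexp(0.0, pre))   # -sum_j log(1 + exp(pre_j))

    print(free_energy([12, 7, 420, 3]))  # lower free energy corresponds to a higher model score

The point of the factorization is that the number of word-dependent parameters grows with V times D rather than with V times H times N, and every context position benefits from the same learned embeddings.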
No file deposited

Dates and versions

hal-01908370 , version 1 (30-10-2018)

Identifiers

  • HAL Id : hal-01908370 , version 1

Cite

Jan Niehues, Alexandre Allauzen, François Yvon, Alexander Waibel. Combining techniques from different NN-based language models for machine translation. Conference of the Association for Machine Translation in the Americas, Yaser Al-Onaizan and Michel Simard, Jan 2014, Vancouver, Canada. ⟨hal-01908370⟩