Preprint, working paper. Year: 2013

Layer-wise learning of deep generative models

Abstract

When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure that admits a performance guarantee relative to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. In this setting, we interpret auto-encoders as generative models by showing that they train a lower bound of this criterion. We test the new learning procedure against a state-of-the-art method (stacked RBMs) and find that it improves performance. Both theory and experiments highlight the importance, when training deep architectures, of using an inference model (from data to hidden variables) richer than the generative model (from hidden variables to data).
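The setting described above can be pictured as a greedy stack of auto-encoders in which the inference model (encoder) is deliberately richer than the generative model (decoder). The sketch below illustrates that general idea only; it is not the authors' algorithm and does not implement the best-latent-marginal criterion. The PyTorch framework, the layer sizes, the two-layer MLP encoder, and the reconstruction loss are all assumptions chosen for the example.

```python
# Illustrative sketch: greedy layer-wise training of a stack of auto-encoders,
# with a richer encoder (inference model) than decoder (generative model).
import torch
import torch.nn as nn

def make_layer(n_in, n_hidden):
    # Rich inference model: a two-layer MLP encoder (an assumption for illustration).
    encoder = nn.Sequential(
        nn.Linear(n_in, 2 * n_hidden), nn.Tanh(),
        nn.Linear(2 * n_hidden, n_hidden), nn.Sigmoid(),
    )
    # Simpler generative model: a single linear-sigmoid decoder.
    decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
    return encoder, decoder

def train_layerwise(data, layer_sizes, epochs=50, lr=1e-3):
    """Train each layer greedily as an auto-encoder; codes feed the next layer."""
    layers = []
    x = data
    n_in = x.shape[1]
    for n_hidden in layer_sizes:
        enc, dec = make_layer(n_in, n_hidden)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            h = enc(x)
            # Reconstruction loss stands in for the generative criterion here.
            loss = nn.functional.binary_cross_entropy(dec(h), x)
            loss.backward()
            opt.step()
        layers.append((enc, dec))
        x = enc(x).detach()  # the layer's codes become the next layer's data
        n_in = n_hidden
    return layers

if __name__ == "__main__":
    x = torch.rand(256, 64)  # toy data in [0, 1]
    stack = train_layerwise(x, layer_sizes=[32, 16])
```

Training each layer on the codes produced by the layer below is what makes the procedure layer-wise: no gradient ever flows through the whole stack at once.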

Dates and versions

hal-00794302, version 1 (25-02-2013)

Identifiers

HAL Id: hal-00794302

Cite

Ludovic Arnold, Yann Ollivier. Layer-wise learning of deep generative models. 2013. ⟨hal-00794302⟩