How Many Dimensions for your Latent Model? A Cross-Domain Perspective
Abstract
Latent representations are ubiquitous in data analytics and AI tasks. They serve as intermediate hidden models that map a set of observations to decisions.
Victor Charpenay and Rodolphe Le Riche compare the perspectives of their respective domains on these intermediate vector representations.
They identify two antagonistic purposes: while the latent variables of statistical models are used to ease computation, the hidden layers of neural networks are meant to capture non-trivial regularities in the observed data. This difference has consequences for the dimension of the latent feature space: looking for regularities implies finding an optimal contraction of the input data into a smaller latent space, in contrast to the infinite-dimensional feature vectors used in kernel-based approaches.
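As a rough illustration of this contrast (a minimal sketch, not taken from the article, with arbitrary data and parameters), the snippet below contracts observations to an explicit two-dimensional latent representation via PCA, and then builds an RBF kernel Gram matrix, whose implicit feature space is infinite-dimensional and is only ever accessed through inner products.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))  # 100 observations in a 10-dimensional input space

# Explicit latent representation: contract the data to a small latent space
# (here 2 dimensions) via PCA, keeping only the strongest regularities.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # latent coordinates, shape (100, 2)

# Kernel-based approach: the RBF kernel corresponds to an implicit,
# infinite-dimensional feature map phi; only the inner products
# K[i, j] = <phi(x_i), phi(x_j)> are computed (the "kernel trick"),
# never the feature vectors themselves.
gamma = 0.1  # illustrative bandwidth parameter
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)  # Gram matrix, shape (100, 100)

print(Z.shape, K.shape)
```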