Preprint / Working Paper, Year: 2024

Theoretical Convergence Guarantees for Variational Autoencoders

Abstract

Variational Autoencoders (VAEs) are popular generative models used to sample from complex data distributions. Despite their empirical success in various machine learning tasks, significant gaps remain in understanding their theoretical properties, particularly regarding convergence guarantees. This paper aims to bridge that gap by providing non-asymptotic convergence guarantees for VAEs trained with both the Stochastic Gradient Descent and Adam algorithms. We derive a convergence rate of $\mathcal{O}(\log n / \sqrt{n})$, where $n$ is the number of iterations of the optimization algorithm, with explicit dependencies on the batch size, the number of variational samples, and other key hyperparameters. Our theoretical analysis applies to both Linear VAEs and Deep Gaussian VAEs, as well as several VAE variants, including the $\beta$-VAE and IWAE. Additionally, we empirically illustrate the impact of hyperparameters on convergence, offering new insights into the theoretical understanding of VAE training.
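As a minimal sketch of the shape such a guarantee typically takes, one can write the rate quoted above as a bound of the following form. This is an illustration only, not the paper's exact theorem: the objective $\mathcal{L}$ (e.g., the negative ELBO), the iterates $\theta_k$, the batch size $b$, and the number of variational samples $M$ are notation assumed here.

\[
\mathbb{E}\left[ \min_{1 \le k \le n} \left\| \nabla \mathcal{L}(\theta_k) \right\|^2 \right] \;\le\; C(b, M)\, \frac{\log n}{\sqrt{n}},
\]

where $n$ is the number of optimization iterations and $C(b, M)$ collects the explicit dependencies on the batch size, the number of variational samples, and the other key hyperparameters that the analysis tracks.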
Main file

SGBLC_2024.pdf (578.49 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04745733, version 1 (21-10-2024)

Identifiers

  • HAL Id: hal-04745733, version 1

Cite

Sobihan Surendran, Antoine Godichon-Baggioni, Sylvain Le Corff. Theoretical Convergence Guarantees for Variational Autoencoders. 2024. ⟨hal-04745733⟩