Learning Conditionally Untangled Latent Spaces using Fixed Point Iteration
Abstract
Normalizing flows (NFs) are powerful generative models that map arbitrarily complex (ambient) distributions to simple (latent) ones, such as the monomodal Gaussian. Despite their ability to model and sample highly nonlinear manifolds, NFs are less effective at assigning labels to the generated data. This stems from the limited expressivity of monomodal Gaussians and from the difficulty of learning multimodal distributions in latent space. In this paper, we devise a multimodal NF-based approach suitable for both image generation and classification. The particularity of our method resides in its ability to learn multimodal Gaussian distributions as part of NF training, using an objective function that combines a likelihood term with a Kullback-Leibler divergence (KLD) criterion. The parameters of the trained Gaussians (namely, the means and covariance matrices) are obtained as an interpretable fixed-point solution of this objective function. Moreover, our method avoids the tedious and sensitive learning-rate tuning required by gradient descent. Extensive experiments conducted on different datasets, including CIFAR10, CIFAR100 and ImageNet, show that our method performs competitively against several baselines as well as the related work.
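The abstract does not reproduce the paper's exact fixed-point equations, so the sketch below is only a rough intuition for how such an update might look: it re-estimates per-class Gaussian means and covariances from the current latent codes, the empirical-moment form that likelihood-driven fixed-point updates commonly reduce to. In the actual method, the update would also reflect the KLD term and would alternate with the flow's own training steps; the function name and the `eps` regularizer here are hypothetical.

```python
import numpy as np

def fixed_point_gaussian_update(z, y, num_classes, eps=1e-6):
    """One illustrative fixed-point step: re-estimate per-class Gaussian
    parameters (means, covariance matrices) from the current latent codes.

    z : (N, D) latent codes obtained by pushing the data through the flow
    y : (N,)   integer class labels in {0, ..., num_classes - 1}
    """
    d = z.shape[1]
    means = np.zeros((num_classes, d))
    covs = np.zeros((num_classes, d, d))
    for c in range(num_classes):
        zc = z[y == c]                 # latent codes assigned to class c
        means[c] = zc.mean(axis=0)     # updated class mean
        diff = zc - means[c]
        # Empirical covariance, regularized to stay positive definite.
        covs[c] = diff.T @ diff / len(zc) + eps * np.eye(d)
    return means, covs

# Toy usage: two well-separated clusters standing in for latent codes.
rng = np.random.default_rng(0)
z = np.vstack([rng.normal(-3, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.repeat([0, 1], 100)
means, covs = fixed_point_gaussian_update(z, y, num_classes=2)
print(means)  # approximately [[-3, -3], [3, 3]]
```

As the abstract notes, an update of this closed-form kind involves no learning rate, which is what lets the method sidestep the tuning that gradient descent would require for the Gaussian parameters.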
Domains
Computer Science [cs]