Journal article in Neural Networks, 2021

Radon Sobolev Variational Auto-Encoders

Abstract

The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, such as the Wasserstein and Sliced Wasserstein distances, the Jensen-Shannon divergence, and the Kullback-Leibler divergence, lack convenient properties such as (geodesic) convexity and fast evaluation. To address these shortcomings, we introduce a class of distances that have built-in convexity. We investigate their relationship with known paradigms (sliced distances, a synonym for Radon distances; reproducing kernel Hilbert spaces; energy distances). The distances are shown to admit fast implementations and are incorporated into an adapted Variational Auto-Encoder, termed Radon Sobolev Variational Auto-Encoder (RS-VAE), which produces high-quality results on standard generative datasets.
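To illustrate the general idea of a sliced (Radon-type) distance used as the latent regularizer of such a model, the sketch below is a minimal PyTorch example, not the exact Radon Sobolev construction of the paper: it compares two batches of latent vectors by projecting them onto random one-dimensional directions and matching the sorted projections. The function name, the quantile-matching comparison, and the training-loop snippet are illustrative assumptions.

```python
import torch

def sliced_distance(x, y, n_projections=50):
    """Generic sliced (Radon-type) distance between two equally sized
    batches of latent vectors x and y, each of shape (batch, dim).
    Illustrative sketch only: random projections followed by sorted
    (1-D quantile) matching, not the paper's Radon Sobolev distance."""
    dim = x.shape[1]
    # Random unit directions on the sphere S^{dim-1}
    theta = torch.randn(n_projections, dim, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both samples onto each direction: shape (n_projections, batch)
    x_proj, y_proj = theta @ x.T, theta @ y.T
    # Compare the 1-D projected distributions via sorted matching
    x_sorted, _ = torch.sort(x_proj, dim=1)
    y_sorted, _ = torch.sort(y_proj, dim=1)
    return ((x_sorted - y_sorted) ** 2).mean()

# Hypothetical use as the latent-prior matching term of a VAE-style model:
# z = encoder(batch)                     # latent codes
# z_prior = torch.randn_like(z)          # samples from the Gaussian prior
# loss = reconstruction_loss + lam * sliced_distance(z, z_prior)
```

In the paper, the one-dimensional comparison after slicing is not quantile matching but a Sobolev-type norm related to reproducing kernel Hilbert space and energy distances, which is what yields the convexity and fast-evaluation properties claimed in the abstract.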

Dates and versions

hal-03212930, version 1 (30-04-2021)

Identifiers

HAL Id: hal-03212930
DOI: 10.1016/j.neunet.2021.04.018

Cite

Gabriel Turinici. Radon Sobolev Variational Auto-Encoders. Neural Networks, 2021, 141, pp.294-305. ⟨10.1016/j.neunet.2021.04.018⟩. ⟨hal-03212930⟩