Preprint, Working Paper. Year: 2024

Tamed Langevin sampling under weaker conditions

Abstract

Motivated by applications to deep learning, which often fail to satisfy standard Lipschitz smoothness requirements, we examine the problem of sampling from distributions that are not log-concave and are only weakly dissipative, with log-gradients allowed to grow superlinearly at infinity. In terms of structure, we only assume that the target distribution satisfies either a log-Sobolev or a Poincaré inequality and a local Lipschitz smoothness assumption with modulus growing possibly polynomially at infinity. This set of assumptions greatly exceeds the operational limits of the "vanilla" unadjusted Langevin algorithm (ULA), making sampling from such distributions a highly involved affair. To account for this, we introduce a taming scheme which is tailored to the growth and decay properties of the target distribution, and we provide explicit non-asymptotic guarantees for the proposed sampler in terms of the Kullback-Leibler (KL) divergence, total variation, and Wasserstein distance to the target distribution.
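The abstract does not spell out the paper's update rule, but the general idea of taming can be illustrated concretely. The sketch below is a minimal NumPy implementation of a generically tamed ULA step: it uses the standard taming factor 1/(1 + γ‖∇U(x)‖) from the taming literature as a stand-in, whereas the scheme proposed in the paper is tailored to the target's growth and decay properties and may differ. The function names and the quartic test potential are illustrative assumptions, not taken from the paper.

import numpy as np

def tamed_ula(grad_log_pi, x0, step, n_steps, rng=None):
    # Generic tamed unadjusted Langevin sampler (illustrative sketch only).
    # grad_log_pi: gradient of the log-density of the target distribution.
    # Vanilla ULA would use the drift g directly; the taming factor
    # 1/(1 + step*||g||) bounds the effective drift by 1/step, so a
    # superlinearly growing log-gradient cannot blow up a single step.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_log_pi(x)
        tamed_drift = g / (1.0 + step * np.linalg.norm(g))
        x = x + step * tamed_drift + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Example: quartic potential U(x) = ||x||^4 / 4, whose gradient ||x||^2 * x
# grows superlinearly at infinity; vanilla ULA with a fixed step can diverge here.
sample = tamed_ula(lambda x: -np.dot(x, x) * x, np.ones(2), 1e-2, 10_000)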
Main file: Main.pdf (719.79 KB). Origin: files produced by the author(s).

Dates and versions

hal-04629313, version 1 (29-06-2024)

Identifiers

HAL Id: hal-04629313

Cite

Iosif Lytras, Panayotis Mertikopoulos. Tamed Langevin sampling under weaker conditions. 2024. ⟨hal-04629313⟩