Preprint / Working paper. Year: 2016

Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm

Abstract

Sampling a distribution over a high-dimensional state space is a problem which has recently attracted considerable research effort; applications include Bayesian non-parametrics, Bayesian inverse problems and aggregation of estimators. All these problems boil down to sampling a target distribution $\pi$ having a density w.r.t. the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor, $x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \, \mathrm{d}y$, where $U$ is continuously differentiable with Lipschitz gradient. In this paper, we study a sampling technique based on the Euler discretization of the Langevin stochastic differential equation. Contrary to the Metropolis Adjusted Langevin Algorithm (MALA), we do not apply a Metropolis-Hastings correction. We obtain, for both constant and decreasing step sizes in the Euler discretization, non-asymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance. Particular attention is paid to the dependence of the bounds on the dimension of the state space, to demonstrate the applicability of this method in the high-dimensional setting, at least when $U$ is convex. These bounds improve and extend the results of (Dalalyan 2014).
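The method analysed in the paper, the Unadjusted Langevin Algorithm (ULA), iterates the Euler scheme $X_{k+1} = X_k - \gamma_{k+1} \nabla U(X_k) + \sqrt{2\gamma_{k+1}}\, Z_{k+1}$, where $(Z_k)_{k \geq 1}$ are i.i.d. standard $d$-dimensional Gaussians and $(\gamma_k)_{k \geq 1}$ is a constant or decreasing sequence of step sizes. The Python sketch below illustrates this recursion; the function names (`ula`, `grad_U`) and the standard-Gaussian example target are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ula(grad_U, x0, n_iter, step, rng=None):
    """Unadjusted Langevin Algorithm: Euler discretization of the Langevin
    SDE dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with no Metropolis-Hastings
    correction applied.

    `step` is either a constant float (fixed step size) or a callable
    k -> gamma_{k+1} giving a decreasing step-size sequence.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    for k in range(n_iter):
        gamma = step(k) if callable(step) else step
        # One Euler step: drift -gamma * grad U(x), plus Gaussian noise
        # with covariance 2 * gamma * I_d.
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        chain[k] = x
    return chain

# Illustrative target: standard Gaussian on R^d, i.e. U(x) = ||x||^2 / 2,
# so grad U(x) = x (convex, with 1-Lipschitz gradient).
if __name__ == "__main__":
    d = 10
    chain = ula(grad_U=lambda x: x, x0=np.zeros(d), n_iter=10_000, step=0.05)
    print(chain[5_000:].mean(axis=0))  # close to 0 after burn-in
```

Because no Metropolis-Hastings correction is applied, every iterate is kept; the step size then controls the bias of the chain's limiting distribution relative to $\pi$, which is what the paper's non-asymptotic total-variation bounds quantify.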
Main file
main.pdf (443.73 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01176132 , version 1 (17-07-2015)
hal-01176132 , version 2 (07-03-2016)
hal-01176132 , version 3 (19-12-2016)

Identifiers

Cite

Alain Durmus, Éric Moulines. Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm. 2016. ⟨hal-01176132v2⟩
587 Views
817 Downloads
