Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm
Abstract
Sampling a distribution over a high-dimensional state space is a problem which has recently attracted considerable research effort; applications include Bayesian non-parametrics, Bayesian inverse problems and aggregation of estimators. All these problems boil down to sampling a target distribution $\pi$ having a density w.r.t. the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor, $x \mapsto \mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d}\mathrm{e}^{-U(y)}\,\mathrm{d}y$, where $U$ is continuously differentiable with Lipschitz gradient. In this paper, we study a sampling technique based on the Euler discretization of the Langevin stochastic differential equation. Contrary to the Metropolis Adjusted Langevin Algorithm (MALA), we do not apply a Metropolis-Hastings correction. For both constant and decreasing step sizes in the Euler discretization, we obtain non-asymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance. Particular attention is paid to the dependence on the dimension of the state space, to demonstrate the applicability of this method in the high-dimensional setting, at least when $U$ is convex. These bounds improve and extend the results of (Dalalyan 2014).
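For illustration, the sketch below (not taken from the paper; the function names, the constant step size, and the standard Gaussian target are assumptions made here) implements the Euler discretization of the Langevin diffusion $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$ without a Metropolis-Hastings correction, i.e. the Unadjusted Langevin Algorithm studied in the paper.

```python
import numpy as np

def ula(grad_U, x0, step_size, n_iter, rng=None):
    """Unadjusted Langevin Algorithm: Euler discretization of the Langevin SDE
    dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with no Metropolis-Hastings correction."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        noise = rng.standard_normal(x.size)
        # One Euler step: drift towards low potential plus Gaussian noise.
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        samples[k] = x
    return samples

# Illustrative convex example: U(x) = ||x||^2 / 2, so the target pi is the
# standard Gaussian on R^d (a hypothetical test case, not from the paper).
if __name__ == "__main__":
    d = 10
    samples = ula(grad_U=lambda x: x, x0=np.zeros(d),
                  step_size=1e-2, n_iter=50_000)
    print(samples[10_000:].mean(axis=0))  # sample mean should be close to 0
```

With a constant step size the chain targets a biased approximation of $\pi$; the paper's non-asymptotic bounds quantify this total variation error, and decreasing step sizes remove the asymptotic bias.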