Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm
Abstract
In this paper, we study a method to sample from a target
distribution $\pi$ over $\mathbb{R}^d$ having a positive density with
respect to the Lebesgue measure, known up to a
normalisation factor. This method is based on the Euler
discretization of the overdamped Langevin stochastic differential
equation associated with $\pi$. For both constant and decreasing
step sizes in the Euler discretization, we obtain non-asymptotic
bounds for the convergence to the target distribution $\pi$ in total
variation distance. Particular attention is paid to the dependency
on the dimension $d$, in order to demonstrate the
applicability of this method in the high-dimensional setting. These
bounds improve and extend the results of Dalalyan (2014).
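The following is a minimal sketch of the sampling scheme the abstract describes (the Unadjusted Langevin Algorithm), assuming a target $\pi(x) \propto e^{-U(x)}$ with $U$ smooth and its gradient available in closed form. The example target, the specific step-size choices, and all function and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ula(grad_U, x0, n_iter, step):
    """Euler discretization of the overdamped Langevin SDE:
    X_{k+1} = X_k - gamma_{k+1} * grad_U(X_k) + sqrt(2 * gamma_{k+1}) * Z_{k+1},
    where (Z_k) are i.i.d. standard Gaussian vectors.

    `step` may be a constant float or a callable k -> gamma_{k+1}, covering the
    constant and decreasing step-size regimes analysed in the paper.
    """
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        gamma = step(k) if callable(step) else step
        noise = rng.standard_normal(x.size)
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

if __name__ == "__main__":
    d = 10                                 # dimension of the state space
    grad_U = lambda x: x                   # U(x) = |x|^2 / 2, i.e. pi = N(0, I_d)
    # Constant step size (illustrative value):
    out_const = ula(grad_U, np.zeros(d), n_iter=10_000, step=1e-2)
    # Decreasing step sizes gamma_{k+1} = gamma_1 / (k+1)^{1/2} (illustrative choice):
    out_decr = ula(grad_U, np.zeros(d), n_iter=10_000, step=lambda k: 1e-1 / (k + 1) ** 0.5)
    print(out_const[-1000:].mean(axis=0))  # empirical mean of late iterates, close to 0
```

Because the Euler chain is not corrected by a Metropolis accept/reject step, its stationary law differs from $\pi$ for any fixed step size; the non-asymptotic total-variation bounds of the paper quantify this bias together with the convergence of the chain, with explicit dependence on $d$.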