High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm
Abstract
We consider in this paper the problem of sampling a
high-dimensional probability distribution $\pi$ having a density
with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation
factor $x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)}\,\mathrm{d}y$.
Such a problem naturally occurs, for example, in Bayesian inference and
machine learning. Under the assumption that $U$ is continuously
differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly
convex, we obtain non-asymptotic bounds for the convergence to
stationarity in Wasserstein distance of order $2$ and total
variation distance of the sampling method based on the Euler
discretization of the Langevin stochastic differential equation, for
both constant and decreasing step sizes. The dependence on the
dimension of the state space of the obtained bounds is studied to
demonstrate the applicability of this method. The convergence of an
appropriately weighted empirical measure is also investigated and
bounds for the mean square error and exponential deviation
inequality are reported for functions which are either Lipschitz
continuous or measurable and bounded. An illustration of Bayesian
inference for binary regression is presented.
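As a rough illustration (not part of the paper itself), a minimal sketch of the sampler studied here, the Euler discretization of the Langevin SDE with constant step size $\gamma$, is given below. The update is $X_{k+1} = X_k - \gamma \nabla U(X_k) + \sqrt{2\gamma}\, Z_{k+1}$ with $Z_{k+1} \sim \mathcal{N}(0, I_d)$; the names `ula_sample` and `grad_U` are illustrative, not from the paper.

```python
import numpy as np

def ula_sample(grad_U, x0, gamma, n_iter, rng=None):
    """Unadjusted Langevin Algorithm with constant step size gamma.

    Euler discretization of dX_t = -grad U(X_t) dt + sqrt(2) dB_t:
        X_{k+1} = X_k - gamma * grad_U(X_k) + sqrt(2 * gamma) * Z_{k+1},
    where Z_{k+1} is a standard d-dimensional Gaussian.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Toy example: U(x) = ||x||^2 / 2 is strongly convex with Lipschitz gradient,
# so pi is the standard Gaussian N(0, I_d) and grad_U(x) = x.
d = 10
chain = ula_sample(lambda x: x, x0=np.zeros(d), gamma=0.05, n_iter=5000)
```

Decreasing step sizes, as also analysed in the paper, would replace the constant `gamma` by a sequence $(\gamma_k)_{k \ge 1}$ inside the loop.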