Entropy-based convergence analysis for (A)MCMC algorithms in high dimension
Abstract
Many recent (Adaptive) Markov Chain Monte Carlo (A)MCMC methods have rates of convergence that are unknown in practice. We propose a simulation-based methodology to estimate and compare the performance of MCMC algorithms, using a Kullback divergence criterion that requires an estimate of the entropy of the algorithm's density at each iteration, computed from iid simulated chains. In previous works, we proved consistency results in the MCMC setup for an entropy estimate based on Monte Carlo integration of a kernel density estimate, proposed by Györfi and van der Meulen (1989), and investigated an alternative Nearest Neighbor (NN) entropy estimate due to Kozachenko and Leonenko (1987). The latter was used mostly in univariate situations until recently, when entropy estimation in higher dimensions was taken up in other fields such as neuroscience and systems biology. Unfortunately, in higher dimensions both estimators converge slowly and exhibit a noticeable bias. The present work goes several steps further, with bias reduction and an automatic (A)MCMC convergence criterion in mind. First, we apply to our situation a recent "crossed NN-type" nonparametric estimate of the Kullback divergence between two densities, introduced by Wang et al. (2006, 2009), which is based on iid samples from each density. We prove the consistency of these NN estimates, under recent uniform control conditions, for the successive densities of a generic class of MCMC algorithms to which most of the methods proposed in the recent literature belong. Second, we propose an original solution based on principal component analysis (PCA) for reducing the relevant dimension, and hence the bias, in even higher dimensions. All our algorithms for MCMC simulation and entropy estimation are implemented in an R package that takes advantage of recent advances in high-performance (parallel) computing.
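To make the entropy criterion concrete, here is a minimal base-R sketch of the 1-NN entropy estimate of Kozachenko and Leonenko (1987) for an iid sample stored as an n × d matrix. The function name `kl_entropy` and the brute-force pairwise-distance computation are ours for illustration only; the paper's R package presumably relies on a more efficient nearest-neighbor search.

```r
# Kozachenko-Leonenko 1-NN entropy estimate (illustrative sketch, not the
# paper's package code). X: n x d matrix of iid draws from the density.
kl_entropy <- function(X) {
  n <- nrow(X); d <- ncol(X)
  D <- as.matrix(dist(X))                # pairwise Euclidean distances
  diag(D) <- Inf                         # exclude the zero self-distances
  rho <- apply(D, 1, min)                # rho_i: distance to nearest neighbor
  log_Vd <- (d / 2) * log(pi) - lgamma(d / 2 + 1)  # log-volume of unit d-ball
  # H_hat = (d/n) * sum(log rho_i) + log V_d + log(n - 1) + Euler's gamma
  d * mean(log(rho)) + log_Vd + log(n - 1) - digamma(1)  # -digamma(1) = gamma
}
```

As a sanity check, for `X <- matrix(rnorm(5000 * 2), ncol = 2)` the estimate should be close to the bivariate Gaussian entropy `(2 / 2) * log(2 * pi * exp(1))`, about 2.84.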
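The "crossed NN-type" Kullback divergence estimate of Wang et al. (2006, 2009) combines within-sample and cross-sample nearest-neighbor distances. A minimal sketch of its 1-NN version follows, assuming `X` holds n iid draws from p and `Y` holds m iid draws from q (the function name `nn_kl_divergence` is ours):

```r
# 1-NN Kullback divergence estimate of D(p || q) from Wang et al. (2006, 2009)
# (illustrative sketch). X: n x d sample from p; Y: m x d sample from q.
nn_kl_divergence <- function(X, Y) {
  n <- nrow(X); m <- nrow(Y); d <- ncol(X)
  DX <- as.matrix(dist(X)); diag(DX) <- Inf
  rho <- apply(DX, 1, min)               # rho_i: NN distance of X_i within X
  nu <- apply(X, 1, function(x)          # nu_i: NN distance of X_i to the Y_j
    sqrt(min(colSums((t(Y) - x)^2))))
  # D_hat = (d/n) * sum(log(nu_i / rho_i)) + log(m / (n - 1))
  (d / n) * sum(log(nu / rho)) + log(m / (n - 1))
}
```

In the setting of the abstract, `X` would collect the values of the parallel iid chains at a given iteration and `Y` a sample from the target, the estimated divergence decaying toward zero as the algorithm converges.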
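The abstract does not detail the PCA-based bias reduction, so the following is only a generic illustration of projecting the samples onto their first k principal components before applying the NN estimates above; it should not be read as the authors' actual construction.

```r
# Generic PCA projection (assumption: illustration only, not the paper's
# specific dimension/bias-reduction procedure).
pca_reduce <- function(X, k) {
  pc <- prcomp(X, center = TRUE, scale. = FALSE)   # principal components of X
  pc$x[, seq_len(k), drop = FALSE]                 # scores on the first k PCs
}
```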