Entropy-based burn-in time analysis and ranking for (A)MCMC algorithms in high dimension
Abstract
Many recent, often adaptive, Markov Chain Monte Carlo ((A)MCMC) methods come with rates of convergence that are unknown in practice. We propose a simulation-based methodology to estimate and compare the performance of MCMC algorithms in terms of shortest burn-in time, using a Kullback divergence criterion that requires an estimate of the entropy of the algorithm's density at each iteration, computed from iid simulated chains. In previous works, we proved consistency results in the MCMC setup for an entropy estimate based on Monte Carlo integration of a kernel density estimate proposed by [18], and we investigated an alternative Nearest Neighbor (NN) entropy estimate from [24]. The latter estimate had been used mostly in univariate situations until recently, when entropy estimation in higher dimensions was taken up in other fields such as neuroscience and systems biology. Unfortunately, in higher dimensions both estimators converge slowly and exhibit a noticeable bias. The present work goes several steps further, with bias reduction and automatic (A)MCMC burn-in time analysis in mind. First, for bias reduction, we apply in our situation a “crossed NN-type” nonparametric estimate of the Kullback divergence between two densities, based on iid samples from each, introduced by [39, 40]. We prove the consistency of these entropy estimates, under recent uniform control conditions, for the successive densities of a generic class of MCMC algorithms to which most of the methods proposed in the recent literature belong. Second, we propose an original solution based on Principal Component Analysis (PCA) for reducing the relevant dimension, and hence the bias, in even higher dimensions whenever PCA is effective. Our algorithms for MCMC simulation and entropy estimation are progressively added to the R package EntropyMCMC, taking advantage of recent advances in high-performance (parallel) computing.
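To make the divergence criterion concrete, the following is a minimal, self-contained R sketch of a 1-nearest-neighbor Kullback divergence estimate between two iid samples, in the spirit of the “crossed NN-type” estimators of [39, 40]: D_hat = (d/n) sum_i log(nu_i / rho_i) + log(m / (n - 1)), where rho_i is the distance from X_i to its nearest neighbor within the X sample and nu_i its nearest-neighbor distance to the Y sample. This is an illustrative sketch only, not the EntropyMCMC package API; the function name kl_nn and the toy Gaussian example are our own.

kl_nn <- function(X, Y) {
  # X: n x d matrix of iid draws from p; Y: m x d matrix of iid draws from q
  X <- as.matrix(X); Y <- as.matrix(Y)
  n <- nrow(X); m <- nrow(Y); d <- ncol(X)
  # rho[i]: distance from X[i, ] to its nearest neighbor within X (itself excluded)
  rho <- sapply(seq_len(n), function(i)
    min(sqrt(colSums((t(X[-i, , drop = FALSE]) - X[i, ])^2))))
  # nu[i]: distance from X[i, ] to its nearest neighbor in Y
  nu <- sapply(seq_len(n), function(i)
    min(sqrt(colSums((t(Y) - X[i, ])^2))))
  # 1-NN estimate of the Kullback divergence K(p, q)
  (d / n) * sum(log(nu / rho)) + log(m / (n - 1))
}

# Toy check in dimension d = 5: K(N(0, I), N(1, I)) = d/2 = 2.5
set.seed(1)
X <- matrix(rnorm(500 * 5), ncol = 5)             # sample from p = N(0, I_5)
Y <- matrix(rnorm(500 * 5, mean = 1), ncol = 5)   # sample from q = N(1, I_5)
kl_nn(X, Y)   # should be close to 2.5

In the burn-in diagnostic sketched here, X would be the iid copies of the chain at iteration t and Y a reference sample, so that the estimated divergence stabilizing near zero signals the end of burn-in; the quadratic-cost loops above are for clarity only, and a fast NN search would be used in practice.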