Importance sampling methods for Bayesian discrimination between embedded models
Abstract
This paper surveys some well-established approaches to the approximation of Bayes factors used in Bayesian model choice, mostly as covered in Chen et al. (2000). Our focus here is on methods based on importance sampling strategies, rather than on variable-dimension techniques such as reversible jump MCMC, including: crude Monte Carlo, maximum likelihood based importance sampling, bridge and harmonic mean sampling, as well as Chib's method, which exploits a functional equality. We illustrate in this survey how these different methods can be efficiently implemented for testing the significance of a predictive variable in a probit model, and we compare their performances on a real dataset.
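As a concrete illustration of the crude Monte Carlo approach mentioned in the abstract, the sketch below approximates the marginal likelihood of a probit model by averaging the likelihood over draws from the prior, and contrasts a full model with a reduced one to obtain a rough Bayes factor. This is only a minimal sketch, not the authors' implementation: the Gaussian prior with variance v0, the function names, and the simulated data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): crude Monte Carlo estimation
# of a probit model's marginal likelihood m(y) = E_prior[ L(beta | y) ].
import numpy as np
from scipy.stats import norm

def probit_log_likelihood(beta, X, y):
    """Log-likelihood of a probit regression: y_i ~ Bernoulli(Phi(x_i' beta))."""
    eta = X @ beta
    return np.sum(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta))

def crude_mc_log_marginal(X, y, v0=10.0, n_sims=10_000, seed=None):
    """Estimate log m(y) by averaging the likelihood over prior draws.

    Assumes an illustrative N(0, v0 * I) prior on the regression coefficients.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    betas = rng.normal(scale=np.sqrt(v0), size=(n_sims, d))  # draws from the prior
    log_lik = np.array([probit_log_likelihood(b, X, y) for b in betas])
    # log-mean-exp for numerical stability
    m = log_lik.max()
    return m + np.log(np.mean(np.exp(log_lik - m)))

# Toy usage: crude Monte Carlo approximation of the Bayes factor comparing the
# full model with a model that drops the last (irrelevant) covariate.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = (X @ np.array([0.5, 1.0, 0.0]) + rng.normal(size=100) > 0).astype(float)
log_m_full = crude_mc_log_marginal(X, y, seed=1)
log_m_reduced = crude_mc_log_marginal(X[:, :2], y, seed=1)
print("log Bayes factor (full vs reduced):", log_m_full - log_m_reduced)
```

The estimator is unbiased but typically has a high variance when the prior is diffuse relative to the likelihood, which is precisely what motivates the more refined importance sampling schemes surveyed in the paper.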