Efficient learning in ABC algorithms
Abstract
Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, the computation time required to ensure a suitable approximation quality of the posterior distribution remains long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012) which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
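To make the comparison mechanism concrete, the following is a minimal sketch of plain ABC rejection sampling, not the calibrated sequential algorithm proposed here. It assumes a toy model (i.i.d. Gaussian data with unknown mean), a uniform prior, the sample mean as summary statistic, and a fixed tolerance `eps`; all names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Simulate a dataset from the (toy) model given parameter theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(data):
    """Summary statistic used to compare datasets."""
    return data.mean()

def abc_rejection(observed, n_samples=1000, eps=0.1):
    """Keep prior draws whose simulated summary lies within eps of the observed one."""
    s_obs = summary(observed)
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-10.0, 10.0)      # draw a candidate from the prior
        s_sim = summary(simulate(theta))      # simulate a dataset and summarize it
        if abs(s_sim - s_obs) <= eps:         # accept if close to the observed summary
            accepted.append(theta)
    return np.array(accepted)

observed = rng.normal(loc=2.0, scale=1.0, size=50)
posterior_sample = abc_rejection(observed)
print(posterior_sample.mean(), posterior_sample.std())
```

The cost of such a scheme is driven by the number of model simulations needed to reach the acceptance target, which is precisely the quantity the sequential algorithm described in this work is calibrated to minimize.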