Multi-armed bandit for stratified sampling: Application to numerical integration
Abstract
Contextual multi-armed bandits model decision problems where the properties of the possible decisions are initially only partially known but may become better known as time passes. Such models have numerous applications, and many algorithms have been proposed to compute approximate solutions. In this paper, we propose an algorithm for computing multidimensional integrals. Such problems are very common and can be solved with the Monte Carlo method combined with stratified sampling, which consists of partitioning the integration domain and then randomly sampling within each partition. Our algorithm treats the selection of the next partition to sample as a multi-armed bandit problem, which can be solved with the Upper Confidence Bound (UCB) technique. We experimented with this approach on several integration problems and observed faster convergence rates.
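To make the idea concrete, here is a minimal Python sketch of UCB-driven stratified Monte Carlo integration in one dimension. It is not the paper's algorithm: the equal-width stratification, the particular UCB score (empirical standard deviation plus an exploration bonus), and all names and parameters (ucb_stratified_integrate, n_strata, c) are illustrative assumptions.

```python
import numpy as np

def ucb_stratified_integrate(f, n_strata=10, n_total=10_000, c=2.0, seed=0):
    """Estimate the integral of f over [0, 1] with stratified Monte Carlo,
    choosing which stratum to sample next with a UCB-style rule that favours
    strata whose contribution is still uncertain (high empirical variance)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_strata + 1)   # equal-width strata (assumption)
    counts = np.zeros(n_strata, dtype=int)        # samples drawn per stratum
    sums = np.zeros(n_strata)                     # running sum of f values
    sums_sq = np.zeros(n_strata)                  # running sum of f^2 values

    # Pull each "arm" (stratum) once so every estimate is defined.
    for k in range(n_strata):
        x = rng.uniform(edges[k], edges[k + 1])
        y = f(x)
        counts[k] += 1
        sums[k] += y
        sums_sq[k] += y * y

    for t in range(n_strata, n_total):
        means = sums / counts
        variances = np.maximum(sums_sq / counts - means**2, 0.0)
        # UCB score (illustrative): empirical std. dev. plus an exploration bonus.
        scores = np.sqrt(variances) + np.sqrt(c * np.log(t + 1) / counts)
        k = int(np.argmax(scores))
        x = rng.uniform(edges[k], edges[k + 1])
        y = f(x)
        counts[k] += 1
        sums[k] += y
        sums_sq[k] += y * y

    # Stratified estimator: each stratum contributes its width times its mean.
    widths = np.diff(edges)
    return float(np.sum(widths * sums / counts))

# Example: integral of x^2 over [0, 1] is 1/3.
print(ucb_stratified_integrate(lambda x: x * x))
```

The bandit view allocates more samples to strata where the integrand varies most, which is where extra samples reduce the variance of the stratified estimator the fastest.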