Policies for Contextual Bandit Problems with Count Payoffs.
Abstract
The contextual bandit problem has attracted major interest in recent years. It corresponds to a sequential decision process in which, at each iteration, an agent chooses an action to perform based on knowledge of the decision environment and the currently available actions, with the aim of maximizing the cumulative reward over time. Many instances of the problem exist, depending on the kind of rewards collected (real, binary, or natural numbers), and various algorithms are known to be efficient for some of these instances, either empirically or theoretically. In this paper we focus on the case of count payoffs, i.e., bandit problems where rewards are integer-valued and potentially unbounded. Based on a Bayesian Poisson regression model, we propose two new contextual bandit algorithms for this particular case, which has several concrete real-world applications: an Upper Confidence Bound algorithm and a Thompson Sampling strategy. Our approaches have the advantage of remaining analytically tractable and computationally efficient. We evaluate the algorithms on both simulated data and a real-world scenario of spread maximization on a social network.
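To illustrate the kind of approach summarized above, the following is a minimal Thompson Sampling sketch for a contextual bandit with Poisson-distributed count payoffs. It is not the paper's exact algorithm: it assumes a Bayesian Poisson regression with a log link and a Gaussian (Laplace-style) approximation of the weight posterior, and the class name `PoissonTSBandit`, its parameters, and the toy simulation are hypothetical.

```python
import numpy as np

class PoissonTSBandit:
    """Illustrative Thompson Sampling for contextual bandits with count payoffs.

    Assumes a Poisson reward model with log link, rate = exp(x . w), and keeps
    a Gaussian (Laplace-style) approximation of the posterior over weights w.
    """

    def __init__(self, dim, prior_var=1.0):
        self.mean = np.zeros(dim)                  # posterior mean of the weights
        self.precision = np.eye(dim) / prior_var   # posterior precision (inverse covariance)

    def select(self, contexts):
        # contexts: array of shape (n_arms, dim), one feature vector per arm.
        cov = np.linalg.inv(self.precision)
        w = np.random.multivariate_normal(self.mean, cov)  # sample from the approximate posterior
        rates = np.exp(contexts @ w)                        # sampled Poisson rate per arm
        return int(np.argmax(rates))                        # play the arm with the highest sampled rate

    def update(self, x, reward):
        # Accumulate the observed information (Laplace precision update), then
        # take one Newton step on the Poisson log-likelihood around the current mean.
        rate = np.exp(x @ self.mean)
        grad = (reward - rate) * x
        self.precision += rate * np.outer(x, x)
        self.mean += np.linalg.solve(self.precision, grad)


# Toy usage: 5 arms, 4-dimensional contexts, rewards simulated from a Poisson model.
rng = np.random.default_rng(0)
true_w = 0.3 * rng.normal(size=4)
bandit = PoissonTSBandit(dim=4)
for t in range(1000):
    contexts = rng.normal(size=(5, 4))
    arm = bandit.select(contexts)
    reward = rng.poisson(np.exp(contexts[arm] @ true_w))
    bandit.update(contexts[arm], reward)
```

The Gaussian approximation keeps the posterior update in closed form (a rank-one precision update and a linear solve), which is one simple way to obtain the analytical tractability and computational efficiency mentioned above.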