On the relevance of bandit algorithms in the digital world
Abstract
In the digital world, more and more autonomous agents make automated decisions to optimize a criterion by learning from their past decisions. Their rapid multiplication means that these agents will interact with each other, even though the algorithms they use were not necessarily designed for this, so unexpected emergent behaviors may occur. In this thesis, we contribute a first step towards the control of a large number of autonomous agents that learn by interacting with an unknown environment, and also with other agents. We focus on bandit algorithms: an agent aims to minimize its regret with respect to the best possible policy by choosing the best actions. Since the agent observes only the outcomes of the actions it has chosen, it faces the exploration-exploitation dilemma: should it choose an action whose outcome is still poorly estimated, or play the action whose outcome is empirically the best?
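To make the exploration-exploitation dilemma concrete, here is a minimal sketch of the classic UCB1 index policy on Bernoulli arms. It is an illustration under assumed arm means and horizon, not code from the thesis: the agent plays the arm with the highest optimistic index, so poorly estimated arms keep being explored while empirically good arms are exploited.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Minimal UCB1 sketch on Bernoulli arms (illustrative assumptions)."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms    # number of pulls per arm
    sums = [0.0] * n_arms    # cumulative reward per arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # play each arm once to initialize estimates
        else:
            # optimistic index: empirical mean + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward

    # regret with respect to always playing the best arm
    return horizon * max(arm_means) - total_reward

print(ucb1(arm_means=[0.3, 0.5, 0.7], horizon=10_000))
```

On this toy instance the regret grows logarithmically with the horizon, which is the behavior bandit algorithms are designed to guarantee.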
Our contributions begin with the study of bandits in evolving environments, which is often the case in applications. We then propose contributions on applications of contextual bandits in the digital world: dynamic allocation of resources for cloud computing, optimization of marketing campaigns, and automatic dialogue. We then formulate the problem of massively decentralized bandits to handle a very common task in the digital world, A/B testing, under the constraint of guaranteeing privacy. Finally, to optimize communications in the Internet of Things, we propose massively multiplayer bandits.
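The multiplayer setting can be illustrated with the collision model commonly used for uncoordinated channel selection: when two devices transmit on the same channel, both receive zero reward. The toy sketch below uses independent epsilon-greedy learners with assumed channel means and parameters; it is not the algorithm proposed in the thesis, only an illustration of the interaction between learning agents.

```python
import random

def multiplayer_round(players, channel_means, rng, epsilon=0.1):
    """One round of a toy multiplayer bandit with collisions (illustrative).

    Each player independently picks a channel; players that pick the same
    channel collide and receive zero reward, as in the standard model of
    uncoordinated IoT transmissions.
    """
    choices = []
    for counts, sums in players:
        if 0 in counts:
            arm = counts.index(0)          # try each channel once
        elif rng.random() < epsilon:
            arm = rng.randrange(len(counts))   # explore
        else:
            arm = max(range(len(counts)),
                      key=lambda a: sums[a] / counts[a])  # exploit
        choices.append(arm)

    for i, arm in enumerate(choices):
        collided = choices.count(arm) > 1
        reward = 0.0 if collided else (
            1.0 if rng.random() < channel_means[arm] else 0.0)
        players[i][0][arm] += 1
        players[i][1][arm] += reward

rng = random.Random(0)
channel_means = [0.9, 0.8, 0.7, 0.3]
players = [([0] * 4, [0.0] * 4) for _ in range(3)]
for _ in range(5_000):
    multiplayer_round(players, channel_means, rng)
for counts, _ in players:
    print(counts)  # pull counts per player; collisions tend to spread players apart
```

Because collisions depress the empirical mean of a contested channel, the players tend to spread over the best channels, although naive epsilon-greedy offers no guarantee of reaching a collision-free allocation; designing algorithms with such guarantees at scale is precisely the difficulty of massively multiplayer bandits.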