Preprint, Working Paper. Year: 2019

New Algorithms for Multiplayer Bandits when Arm Means Vary Among Players

Abstract

We study multiplayer stochastic multi-armed bandit problems in which the players cannot communicate, and if two or more players pull the same arm, a collision occurs and the involved players receive zero reward. Moreover, we assume each arm has a different mean for each player. Let $T$ denote the number of rounds. An algorithm with regret $O((\log T)^{2+\kappa})$ for any constant $\kappa>0$ was recently presented by Bistritz and Leshem (NeurIPS 2018), who left the existence of an algorithm with $O(\log T)$ regret as an open question. In this paper, we provide an affirmative answer to this question in the case when there is a unique optimal assignment of players to arms. For the general case, we present an algorithm with expected regret $O((\log T)^{1+\kappa})$ for any $\kappa>0$.
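To make the collision model concrete, here is a minimal Python sketch of the setting described in the abstract: each player has her own mean reward for each arm, and a round in which two or more players pull the same arm yields zero reward for all players involved. The class and method names (`MultiplayerBandit`, `play_round`) and the Bernoulli reward distribution are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class MultiplayerBandit:
    """Minimal simulator of a multiplayer bandit with player-dependent
    arm means and zero reward on collisions (illustrative sketch)."""

    def __init__(self, means, rng=None):
        # means[m, k] = mean reward of arm k for player m
        self.means = np.asarray(means, dtype=float)
        self.n_players, self.n_arms = self.means.shape
        self.rng = rng or np.random.default_rng()

    def play_round(self, choices):
        """choices[m] = arm pulled by player m; returns per-player rewards."""
        choices = np.asarray(choices)
        rewards = np.zeros(self.n_players)
        for m, k in enumerate(choices):
            if np.sum(choices == k) > 1:
                # collision: every player pulling this arm gets zero reward
                rewards[m] = 0.0
            else:
                # Bernoulli reward with a player-specific mean (an assumption
                # for illustration; the model only requires each player-arm
                # pair to have its own mean)
                rewards[m] = self.rng.binomial(1, self.means[m, k])
        return rewards

# Example: 2 players, 3 arms, with arm means that vary among players
bandit = MultiplayerBandit(means=[[0.9, 0.5, 0.2],
                                  [0.4, 0.8, 0.6]])
print(bandit.play_round([0, 1]))  # no collision: independent Bernoulli rewards
print(bandit.play_round([1, 1]))  # collision on arm 1: both players get 0
```

In this setting, regret is measured over $T$ rounds against the best collision-free assignment of players to arms (the "optimal assignment" mentioned above); the algorithms in the paper aim to keep this gap logarithmic, or nearly logarithmic, in $T$.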
Main file: KM19.pdf (245.46 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02006069 , version 1 (04-02-2019)
hal-02006069 , version 2 (05-06-2019)
hal-02006069 , version 3 (03-03-2020)

Identifiers

Cite

Emilie Kaufmann, Abbas Mehrabian. New Algorithms for Multiplayer Bandits when Arm Means Vary Among Players. 2019. ⟨hal-02006069v1⟩
419 Views
413 Downloads
