Near-Optimal Collaborative Learning in Bandits - HAL Open Archive
Conference paper, Year: 2022

Near-Optimal Collaborative Learning in Bandits

Clémence Réda, Sattar Vakili, Emilie Kaufmann

Abstract

This paper introduces a general multi-agent bandit model in which each agent is facing a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting makes it possible to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [30]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound.
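
To make the mixed-reward objective concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the weight matrix W, the reward table mu, and the problem sizes are assumptions) that computes each agent's optimal arm as the argmax of its expected mixed reward.

    # Minimal sketch of the mixed-reward objective described in the abstract.
    # All names and sizes here are illustrative assumptions, not from the paper.
    import numpy as np

    M, K = 3, 4                       # hypothetical: M agents, K arms
    rng = np.random.default_rng(0)

    # mu[n, k]: expected reward of arm k for agent n. Assumed known here;
    # in the bandit setting these means must be estimated from samples.
    mu = rng.uniform(size=(M, K))

    # W[m, n]: weight that agent m places on agent n's rewards (rows sum to 1).
    # W = identity recovers M independent bandit problems; the uniform choice
    # below makes every agent optimize the same global average reward.
    W = np.full((M, M), 1.0 / M)

    # Mixed reward of arm k for agent m: sum over n of W[m, n] * mu[n, k].
    mixed = W @ mu                    # shape (M, K)

    # Each agent's optimal arm maximizes its expected mixed reward.
    optimal_arms = mixed.argmax(axis=1)
    print(optimal_arms)               # same arm for all agents under uniform W

In the federated learning with personalization model cited as [30], W would instead (roughly) mix each agent's own mean rewards with the global average, so different agents can have different optimal arms while still benefiting from shared samples.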
Main file: RVK22.pdf (535.74 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03825099, version 1 (21-10-2022)

Identifiers

Cite

Clémence Réda, Sattar Vakili, Emilie Kaufmann. Near-Optimal Collaborative Learning in Bandits. NeurIPS 2022 - 36th Conference on Neural Information Processing Systems, Dec 2022, New Orleans, United States. ⟨hal-03825099⟩
