Preprint, Working Paper. Year: 2024

Active clustering with bandit feedback

Abstract

We investigate the Active Clustering Problem (ACP). A learner interacts with an $N$-armed stochastic bandit with $d$-dimensional sub-Gaussian feedback. There exists a hidden partition of the arms into $K$ groups, such that arms within the same group share the same mean vector. The learner's task is to uncover this hidden partition with the smallest budget, i.e., the least number of observations, and with a probability of error smaller than a prescribed constant $\delta$. In this paper, (i) we derive a non-asymptotic lower bound for the budget, and (ii) we introduce the computationally efficient ACB algorithm, whose budget matches the lower bound in most regimes. Our algorithm improves on the performance of a uniform sampling strategy. Importantly, contrary to the batch setting, we establish that there is no computation-information gap in the active setting.
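To make the setting concrete, here is a minimal Python sketch (not from the paper) of an ACP-style environment together with the naive uniform-sampling baseline mentioned above, assuming Gaussian feedback with known noise level; the names `ACPEnvironment` and `pull` are hypothetical and chosen only for illustration.

```python
import numpy as np


class ACPEnvironment:
    """Toy simulator for the setting described in the abstract (hypothetical code,
    not the authors' implementation).

    N arms are partitioned into K hidden groups; arms in the same group share a
    d-dimensional mean vector. Pulling an arm returns its group mean plus Gaussian
    noise, a special case of the sub-Gaussian feedback assumed in the paper.
    """

    def __init__(self, N, K, d, separation=1.0, noise_std=1.0, seed=None):
        self.rng = np.random.default_rng(seed)
        self.noise_std = noise_std
        # Hidden partition: each arm gets one of K labels, with every group non-empty.
        labels = np.concatenate([np.arange(K), self.rng.integers(0, K, size=N - K)])
        self.labels = self.rng.permutation(labels)
        # One d-dimensional mean vector per group; `separation` controls how far apart they are.
        self.means = separation * self.rng.standard_normal((K, d))
        self.budget_used = 0

    def pull(self, arm):
        """Return one noisy d-dimensional observation from the chosen arm."""
        self.budget_used += 1
        mu = self.means[self.labels[arm]]
        return mu + self.noise_std * self.rng.standard_normal(mu.shape)


# Naive baseline: sample every arm the same number of times (the uniform strategy
# that the ACB algorithm improves upon), then estimate each arm's mean vector.
if __name__ == "__main__":
    N, K, d = 20, 3, 5
    env = ACPEnvironment(N=N, K=K, d=d, separation=2.0, seed=0)
    T = 50  # pulls per arm
    estimates = np.array([np.mean([env.pull(a) for _ in range(T)], axis=0)
                          for a in range(N)])
    print("budget used:", env.budget_used)            # N * T observations
    print("estimated means shape:", estimates.shape)  # (N, d)
```

Recovering the hidden partition would then amount to grouping arms whose estimated means are close; an active strategy instead adapts the number of pulls per arm so that hard-to-separate arms receive more observations.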
Main file

0_main.pdf (565.73 KB)
Origin: Files produced by the author(s)
Dates and versions

hal-04610780, version 1 (14-06-2024)

Identifiers

  • HAL Id: hal-04610780, version 1

Cite

Victor Thuot, Alexandra Carpentier, Christophe Giraud, Nicolas Verzelen. Active clustering with bandit feedback. 2024. ⟨hal-04610780⟩
