Preprint, working paper. Year: 2019

Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling

Abstract

Stochastic Rank-One Bandits (Katariya et al., 2017a,b) are a simple framework for regret minimization problems over rank-one matrices of arms. The initially proposed algorithms are proved to have logarithmic regret, but do not match the existing lower bound for this problem. We close this gap by first proving that rank-one bandits are a particular instance of unimodal bandits, and then providing a new analysis of Unimodal Thompson Sampling (UTS), initially proposed by Paladino et al. (2017). We prove an asymptotically optimal bound on the frequentist regret of UTS, and we support our claims with simulations showing the significant improvement of our method over the state-of-the-art.
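
To make the algorithmic idea concrete, here is a minimal sketch (not the authors' code) of how UTS could be instantiated on a Bernoulli rank-one bandit: arm (i, j) has mean U[i]·V[j], its neighborhood in the unimodal graph is the set of arms sharing its row or column, and each entry gets a Beta(1, 1) prior. The function name `uts_rank_one` and the exact forced-exploitation schedule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def uts_rank_one(U, V, horizon, seed=None):
    """Sketch of Unimodal Thompson Sampling (UTS) on a Bernoulli rank-one bandit:
    the mean of arm (i, j) is U[i] * V[j]; the neighborhood of an arm is the set
    of arms lying in the same row or the same column."""
    rng = np.random.default_rng(seed)
    K, L = len(U), len(V)
    means = np.outer(U, V)
    best = means.max()

    # Beta(1, 1) priors on every entry of the matrix.
    successes = np.ones((K, L))
    failures = np.ones((K, L))
    leader_count = np.zeros((K, L), dtype=int)
    gamma = (K - 1) + (L - 1)          # maximal degree of the neighborhood graph
    regret = 0.0

    for _ in range(horizon):
        # The leader is the arm with the highest empirical mean.
        emp = successes / (successes + failures)
        i_l, j_l = np.unravel_index(np.argmax(emp), emp.shape)
        leader_count[i_l, j_l] += 1

        if leader_count[i_l, j_l] % (gamma + 1) == 0:
            i, j = i_l, j_l            # periodically exploit the leader
        else:
            # Thompson sample among the leader and its row/column neighbors.
            neighbors = [(i_l, j_l)]
            neighbors += [(i, j_l) for i in range(K) if i != i_l]
            neighbors += [(i_l, j) for j in range(L) if j != j_l]
            samples = [rng.beta(successes[a], failures[a]) for a in neighbors]
            i, j = neighbors[int(np.argmax(samples))]

        reward = rng.random() < means[i, j]
        successes[i, j] += reward
        failures[i, j] += 1 - reward
        regret += best - means[i, j]

    return regret

# Example run on a 2 x 2 rank-one instance.
print(uts_rank_one(np.array([0.9, 0.5]), np.array([0.8, 0.3]), horizon=5000))
```

On such an easy instance the cumulative regret should grow logarithmically with the horizon; the precise leader definition and exploitation schedule analyzed in the paper may differ slightly from this sketch.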

Dates and versions

hal-02396943 , version 1 (06-12-2019)
hal-02396943 , version 2 (17-02-2020)

Identifiers

hal-02396943

Cite

Cindy Trinh, Emilie Kaufmann, Claire Vernade, Richard Combes. Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling. 2019. ⟨hal-02396943v1⟩
