Conference paper, 2021

Bandit Algorithm for Both Unknown Best Position and Best Item Display on Web Pages

Abstract

Multiple-play bandits aim at displaying relevant items at relevant positions on a web page. We introduce a new bandit-based algorithm, PB-MHB, for online recommender systems, which uses the Thompson sampling framework with a Metropolis-Hastings approximation. The algorithm handles a display setting governed by the position-based model. Our sampling method does not require as input the probability that a user looks at a given position on the web page, which is difficult to obtain in some applications. Experiments on simulated and real datasets show that our method, with less prior information, delivers better recommendations than state-of-the-art algorithms.
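The abstract only names the ingredients of PB-MHB. As a rough illustration of how those ingredients fit together, here is a minimal sketch, not the authors' implementation, of a position-based-model bandit that runs Thompson sampling with a random-walk Metropolis-Hastings step to sample both item attractiveness and position examination probabilities from click data. All names (`theta`, `kappa`), the flat priors, and the proposal width are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, kappa, displays, clicks):
    # Bernoulli log-likelihood under the position-based model with flat
    # priors on (0, 1): P(click on item i shown at position l) = theta[i] * kappa[l].
    if np.any(theta <= 0) or np.any(theta >= 1) or np.any(kappa <= 0) or np.any(kappa >= 1):
        return -np.inf
    p = np.outer(theta, kappa)
    return np.sum(clicks * np.log(p) + (displays - clicks) * np.log(1.0 - p))

def mh_sample(theta, kappa, displays, clicks, n_steps=50, width=0.05):
    # Random-walk Metropolis-Hastings over (theta, kappa), warm-started
    # from the previous round's sample.
    logp = log_posterior(theta, kappa, displays, clicks)
    for _ in range(n_steps):
        th = theta + width * rng.standard_normal(theta.shape)
        ka = kappa + width * rng.standard_normal(kappa.shape)
        new_logp = log_posterior(th, ka, displays, clicks)
        if np.log(rng.random()) < new_logp - logp:
            theta, kappa, logp = th, ka, new_logp
    return theta, kappa

# Simulated environment (illustrative values only).
n_items, n_positions, horizon = 6, 3, 200
true_theta = rng.uniform(0.1, 0.9, n_items)                      # item attractiveness
true_kappa = np.sort(rng.uniform(0.2, 1.0, n_positions))[::-1]   # examination probabilities

displays = np.zeros((n_items, n_positions))
clicks = np.zeros((n_items, n_positions))
theta = np.full(n_items, 0.5)
kappa = np.full(n_positions, 0.5)

for t in range(horizon):
    # Thompson sampling: draw parameters from the approximate posterior ...
    theta, kappa = mh_sample(theta, kappa, displays, clicks)
    # ... then place the items with the largest sampled theta at the
    # positions with the largest sampled kappa.
    items = np.argsort(-theta)[:n_positions]
    slots = np.argsort(-kappa)
    for item, pos in zip(items, slots):
        displays[item, pos] += 1
        clicks[item, pos] += rng.random() < true_theta[item] * true_kappa[pos]
```

The sketch only illustrates why no examination probabilities need to be supplied as input: they are sampled alongside the item parameters at every round. The actual PB-MHB proposal distribution and update schedule are described in the paper.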
Main file
IDA_2021_Paper___PB_MHB.pdf (1.13 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03163763, version 1 (09-03-2021)

Identifiers

  • HAL Id: hal-03163763, version 1

Cite

Camille-Sovanneary Gauthier, Romaric Gaudel, Elisa Fromont. Bandit Algorithm for Both Unknown Best Position and Best Item Display on Web Pages. IDA 2021 - 19th International Symposium on Intelligent Data Analysis, Apr 2021, Porto (virtual), Portugal. pp.12. ⟨hal-03163763⟩
