Conference paper, 2024

Preference-based Pure Exploration

Abstract

We study the preference-based pure exploration problem for bandits with vector-valued rewards ordered using a preference cone $\mathcal{C}$, with the goal of identifying the most preferred policy over the set of arms. First, to quantify the impact of preferences, we derive a novel lower bound on the sample complexity of identifying the most preferred policy with confidence level $1-\delta$. Our lower bound elicits the role played by the geometry of the preference cone and highlights the difference in hardness compared to best-arm variants of the problem. We further explicate this geometry when rewards follow a Gaussian distribution, and provide a convex reformulation of the lower bound. Then, we leverage this convex reformulation to design the Preference-based Track and Stop (PreTS) algorithm that identifies the most preferred policy. Finally, we derive a new concentration result for vector-valued rewards, and show that PreTS achieves a matching sample complexity upper bound.
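To make the cone-induced ordering concrete, here is a minimal illustration in standard notation; the choice of cone (the non-negative orthant, i.e. Pareto dominance) and the symbols $\mu_a, \mu_b \in \mathbb{R}^d$ for mean reward vectors are assumptions made for this sketch, not notation taken from the paper:

$$\mu_a \succeq_{\mathcal{C}} \mu_b \iff \mu_a - \mu_b \in \mathcal{C}, \qquad \text{e.g. } \mathcal{C} = \mathbb{R}^d_{+} \text{ yields componentwise (Pareto) dominance}.$$

Under such an ordering, the most preferred policy is one whose expected vector reward is not dominated, with respect to $\mathcal{C}$, by that of any other policy over the arms; narrower or wider cones change which reward vectors are comparable, which is one way the geometry of $\mathcal{C}$ can enter the sample complexity.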
Main file
2024_Preference_based_Pure_Exploration.pdf (435.47 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04733134, version 1 (11-10-2024)
hal-04733134, version 2 (05-12-2024)

Identifiers

  • HAL Id: hal-04733134, version 1

Cite

Apurv Shukla, Debabrota Basu. Preference-based Pure Exploration. Advances in Neural Information Processing Systems (NeurIPS), Dec 2024, Vancouver, Canada. ⟨hal-04733134v1⟩
