Conference paper, Year: 2023

Exponential Smoothing for Off-Policy Learning

Abstract

Off-policy learning (OPL) aims at finding improved policies from logged bandit data, often by minimizing the inverse propensity scoring (IPS) estimator of the risk. In this work, we investigate a smooth regularization for IPS, for which we derive a two-sided PAC-Bayes generalization bound. The bound is tractable, scalable, interpretable, and provides learning certificates. In particular, it is also valid for standard IPS without assuming that the importance weights are bounded. We demonstrate the relevance of our approach and its favorable performance on a set of learning tasks. Since our bound holds for standard IPS, we can provide insight into when regularizing IPS is useful; namely, we identify cases where regularization may not be needed. This challenges the common belief that, in practice, clipped IPS often enjoys more favorable performance than standard IPS in OPL.
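To make the setup concrete, here is a minimal sketch (not the authors' code; all names and data are hypothetical) of the IPS risk estimator with exponential smoothing, assuming the smoothing takes the form of raising the logging propensities to a power alpha in [0, 1], so that alpha = 1 recovers standard IPS:

    import numpy as np

    def smoothed_ips_risk(costs, target_probs, logging_probs, alpha=1.0):
        """Exponentially smoothed IPS estimate of the risk.

        costs:         observed costs c_i of the logged actions
        target_probs:  pi(a_i | x_i) under the policy being evaluated
        logging_probs: mu(a_i | x_i) under the logging policy
        alpha:         smoothing exponent in [0, 1]; alpha = 1 is standard IPS
        """
        # Importance weights pi(a|x) / mu(a|x)^alpha.
        weights = target_probs / logging_probs ** alpha
        # Empirical risk estimate: average of the weighted costs.
        return np.mean(costs * weights)

    # Hypothetical logged bandit data (values are purely illustrative).
    rng = np.random.default_rng(0)
    n = 1000
    logging_probs = rng.uniform(0.05, 1.0, size=n)   # mu(a_i | x_i)
    target_probs = rng.uniform(0.0, 1.0, size=n)     # pi(a_i | x_i)
    costs = rng.binomial(1, 0.3, size=n).astype(float)

    print(smoothed_ips_risk(costs, target_probs, logging_probs, alpha=1.0))  # standard IPS
    print(smoothed_ips_risk(costs, target_probs, logging_probs, alpha=0.7))  # smoothed variant

Decreasing alpha shrinks the largest importance weights, trading variance for bias; since the bound in the paper also covers the alpha = 1 (standard IPS) case, it can indicate when such shrinkage is unnecessary.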
Main file
Exponential_Smoothing_for_Off_Policy_Learning.pdf (2.68 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04125076, version 1 (11-06-2023)

Identifiers

  • HAL Id: hal-04125076, version 1

Cite

Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba. Exponential Smoothing for Off-Policy Learning. 40th International Conference on Machine Learning (ICML 2023), Jul 2023, Honolulu, HI, United States. ⟨hal-04125076⟩