Counterfactual Learning of Continuous Stochastic Policies
Abstract
Counterfactual reasoning from logged data has become increasingly important for many applications such as web advertising or healthcare. In this paper, we address the problem of counterfactual risk minimization (CRM) for learning a stochastic policy with continuous actions, whereas most existing work has focused on the discrete setting. Switching from discrete to continuous action spaces presents several difficulties as naive discretization strategies have been shown to perform poorly. To deal with this issue, we first introduce an effective contextual modelling strategy that learns a joint representation of contexts and actions based on positive definite kernels. Second, we empirically show that the optimization perspective of CRM is more important than previously thought, and we demonstrate the benefits of proximal point algorithms and differentiable estimators. Finally, we propose an evaluation protocol for offline policies in real-world logged systems, which is challenging since policies cannot be replayed on test data, and we release a new large-scale dataset along with multiple synthetic, yet realistic, evaluation setups.
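For background, the counterfactual risk in this setting is typically estimated by clipped importance sampling, a standard formulation in the CRM literature (Bottou et al., 2013; Swaminathan and Joachims, 2015); the notation below is illustrative and not necessarily the paper's exact estimator. Given logged triples $(x_i, a_i, y_i)_{i=1}^n$ collected under a logging policy $\pi_0$, a parametric policy $\pi_\theta$ is evaluated by

$$
\hat{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} y_i \, \min\!\left( \frac{\pi_\theta(a_i \mid x_i)}{\pi_0(a_i \mid x_i)},\, M \right),
$$

where $y_i$ is the observed loss and the clipping constant $M$ trades bias against the variance of the importance weights.

The proximal point algorithms mentioned in the abstract replace each update on this non-convex objective by an inner minimization of $\hat{L}(\theta) + \frac{\kappa}{2} \lVert \theta - \theta_k \rVert^2$. Below is a minimal sketch of one such iteration for an assumed Gaussian policy whose mean is linear in the context; the function names, policy parameterization, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import math
import torch

def crm_loss(theta, contexts, actions, losses, log_propensities,
             clip=100.0, sigma=0.5):
    """Clipped importance-sampling estimate of the counterfactual risk
    for an (assumed) Gaussian policy pi_theta(a | x) = N(x @ theta, sigma^2).
    `log_propensities` are the logging policy's log-densities
    log pi_0(a_i | x_i), recorded when the data was collected."""
    mean = contexts @ theta
    log_pi = (-0.5 * ((actions - mean) / sigma) ** 2
              - 0.5 * math.log(2 * math.pi * sigma ** 2))
    # Importance weights pi_theta / pi_0, clipped at M = `clip` to bound variance.
    weights = torch.exp(log_pi - log_propensities).clamp(max=clip)
    return (weights * losses).mean()

def proximal_point_step(theta_k, data, kappa=1.0, inner_steps=50, lr=0.1):
    """One proximal point iteration: approximately minimize
    crm_loss(theta) + (kappa / 2) * ||theta - theta_k||^2 by gradient descent."""
    theta = theta_k.clone().requires_grad_(True)
    optimizer = torch.optim.SGD([theta], lr=lr)
    for _ in range(inner_steps):
        optimizer.zero_grad()
        objective = (crm_loss(theta, *data)
                     + 0.5 * kappa * torch.sum((theta - theta_k) ** 2))
        objective.backward()
        optimizer.step()
    return theta.detach()
```

Iterating `theta = proximal_point_step(theta, data)` gives the outer loop; a larger `kappa` keeps successive policies close to one another, which is the kind of stabilization the abstract alludes to when it argues that the optimization perspective of CRM matters.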