Adaptive reward-free exploration - HAL open archive
Conference paper Year: 2021

Adaptive reward-free exploration

Emilie Kaufmann
Pierre Ménard
Omar Darwiche Domingues
Edouard Leurent
Michal Valko

Abstract

Reward-free exploration is a reinforcement learning setting recently studied by Jin et al., who address it by running several algorithms with regret guarantees in parallel. In our work, we instead propose a more adaptive approach to reward-free exploration which directly reduces upper bounds on the maximum MDP estimation error. We show that, interestingly, our reward-free UCRL algorithm (RF-UCRL) can be seen as a variant of an algorithm proposed by Fiechter in 1994 [11] for a different objective that we call best-policy identification. We prove that RF-UCRL needs O((SAH^4/ε^2) log(1/δ)) episodes to output, with probability 1 − δ, an ε-approximation of the optimal policy for any reward function. We empirically compare it to oracle strategies using a generative model.
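To make the idea in the abstract concrete, here is a minimal illustrative Python sketch of reward-free exploration driven by an upper bound on the estimation error, for a finite tabular MDP with horizon H. It is not the authors' implementation: the environment interface (env_reset(), env_step(s, a, h)), the bonus constant, and the stopping threshold ε/2 are assumptions chosen for illustration, not the exact quantities analyzed in the paper.

import numpy as np

def rf_ucrl_sketch(env_reset, env_step, S, A, H, epsilon, delta, max_episodes=10000):
    # Illustrative sketch only: act greedily w.r.t. an upper bound W_h(s, a)
    # on the estimation error, recomputed by backward induction from visit
    # counts, and stop once the bound at the initial state drops below
    # epsilon / 2. Bonus constants are placeholders, not the paper's.
    counts = np.zeros((H, S, A))            # visit counts n_h(s, a)
    trans = np.zeros((H, S, A, S))          # transition counts n_h(s, a, s')

    for episode in range(max_episodes):
        # Empirical transition kernel (uniform where a pair is unvisited).
        p_hat = np.where(counts[..., None] > 0,
                         trans / np.maximum(counts[..., None], 1),
                         1.0 / S)

        # Backward induction on the error upper bound W.
        W = np.zeros((H + 1, S, A))
        bonus = H * np.sqrt(np.log(2 * S * A * H / delta) / np.maximum(counts, 1))
        for h in range(H - 1, -1, -1):
            next_best = W[h + 1].max(axis=-1)          # max_a W_{h+1}(s', a)
            W[h] = np.minimum(H, bonus[h] + p_hat[h] @ next_best)

        s = env_reset()
        if W[0, s].max() <= epsilon / 2:
            return episode, counts, trans              # enough data collected

        # One exploration episode, greedy w.r.t. the error bound W.
        for h in range(H):
            a = int(np.argmax(W[h, s]))
            s_next = env_step(s, a, h)
            counts[h, s, a] += 1
            trans[h, s, a, s_next] += 1
            s = s_next

    return max_episodes, counts, trans

Once the loop stops, the collected counts define an empirical model from which a near-optimal policy can be computed for any reward function supplied afterwards, which is the reward-free objective stated in the abstract.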
Main file
arxiv_rf.pdf (584.62 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02864574, version 1 (11-06-2020)

Identifiers

  • HAL Id: hal-02864574, version 1

Cite

Emilie Kaufmann, Pierre Ménard, Omar Darwiche Domingues, Anders Jonsson, Edouard Leurent, et al.. Adaptive reward-free exploration. Algorithmic Learning Theory, 2021, Paris, France. ⟨hal-02864574⟩
116 Views
131 Downloads
