Preprint, Working Paper. Year: 2023

Towards Instance-Optimality in Online PAC Reinforcement Learning

Abstract

Several recent works have proposed instance-dependent upper bounds on the number of episodes needed to identify, with probability 1 − δ, an ε-optimal policy in finite-horizon tabular Markov Decision Processes (MDPs). These upper bounds feature various complexity measures for the MDP, each defined through a different notion of sub-optimality gap. However, to date, no lower bound has been established to assess the optimality of any of these complexity measures, except in the special case of MDPs with deterministic transitions. In this paper, we propose the first instance-dependent lower bound on the sample complexity required for the PAC identification of a near-optimal policy in any tabular episodic MDP. Additionally, we demonstrate that the sample complexity of the PEDEL algorithm of Wagenmaker and Jamieson (2022) closely approaches this lower bound. Given that PEDEL is computationally intractable, we pose the open question of whether our lower bound can be attained by a computationally efficient algorithm.
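
Since the abstract leans on standard PAC-RL terminology, here is a minimal LaTeX sketch of the usual definitions it alludes to: value functions, sub-optimality gaps, and the (ε, δ)-PAC criterion. The notation below is the conventional one for finite-horizon tabular MDPs and is an illustrative assumption, not an excerpt from the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Finite-horizon tabular MDP: states S, actions A, horizon H,
% transitions p_h and rewards r_h (standard notation, assumed here).
Value functions of a policy $\pi$, by backward induction with $V^{\pi}_{H+1} \equiv 0$:
\[
  Q^{\pi}_{h}(s,a) = r_{h}(s,a) + \sum_{s'} p_{h}(s' \mid s,a)\, V^{\pi}_{h+1}(s'),
  \qquad
  V^{\pi}_{h}(s) = Q^{\pi}_{h}\bigl(s, \pi_{h}(s)\bigr).
\]
% Optimal values and the sub-optimality gaps on which the
% complexity measures mentioned in the abstract are built:
\[
  V^{\star}_{h}(s) = \max_{\pi} V^{\pi}_{h}(s),
  \qquad
  Q^{\star}_{h}(s,a) = \max_{\pi} Q^{\pi}_{h}(s,a),
  \qquad
  \Delta_{h}(s,a) = V^{\star}_{h}(s) - Q^{\star}_{h}(s,a).
\]
% (epsilon, delta)-PAC identification: after a random number of episodes tau
% (the sample complexity), the algorithm returns a policy hat-pi such that
\[
  \mathbb{P}\Bigl( V^{\hat{\pi}}_{1}(s_{1}) \ \geq\ V^{\star}_{1}(s_{1}) - \varepsilon \Bigr) \ \geq\ 1 - \delta .
\]
\end{document}

Gap-dependent upper bounds in this literature typically aggregate these gaps, for instance through sums of the form $\sum_{h,s,a} 1/\max(\Delta_{h}(s,a), \varepsilon)^{2}$; the paper's contribution is an instance-dependent lower bound against which such measures can be assessed.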
Main file: main.pdf (664.49 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04270888, version 1 (05-11-2023)

Licence

Copyright (All rights reserved)

Identifiers

  • HAL Id: hal-04270888, version 1

Cite

Aymen Al-Marjani, Andrea Tirinzoni, Emilie Kaufmann. Towards Instance-Optimality in Online PAC Reinforcement Learning. 2023. ⟨hal-04270888⟩