Conference paper, 2022

Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs

Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann

Abstract

In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an ε-optimal policy with probability 1 − δ. While minimax optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first nearly matching (up to a horizon squared factor and logarithmic terms) upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is written as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts (minimum flows) and a new maximum-coverage exploration strategy.
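To make the abstract's central quantity concrete, here is a minimal Python sketch of one plausible reading of a per-(step, state, action) return gap in a toy deterministic episodic MDP: the optimal return from the initial state minus the best return of any trajectory that traverses that triple. This is an illustration only, not the paper's definition or algorithm; the MDP setup (H, transition, reward, s0) and the exact gap formula are assumptions.

# Illustrative sketch only (not the authors' algorithm): a per-(step, state,
# action) "return gap" in a toy deterministic episodic MDP, computed with two
# exact dynamic-programming passes. All names here are assumptions.
import numpy as np

H, nS, nA = 3, 4, 2                           # horizon, states, actions
rng = np.random.default_rng(0)
transition = rng.integers(nS, size=(nS, nA))  # deterministic next state
reward = rng.random((nS, nA))                 # deterministic reward
s0 = 0                                        # initial state

# Backward pass: V[h, s] = best return-to-go from state s at step h.
V = np.zeros((H + 1, nS))
for h in range(H - 1, -1, -1):
    for s in range(nS):
        V[h, s] = max(reward[s, a] + V[h + 1, transition[s, a]] for a in range(nA))

# Forward pass: F[h, s] = best cumulative reward collected while reaching
# state s at step h from s0 (-inf if s is unreachable at step h).
F = np.full((H, nS), -np.inf)
F[0, s0] = 0.0
for h in range(H - 1):
    for s in range(nS):
        if F[h, s] > -np.inf:
            for a in range(nA):
                s_next = transition[s, a]
                F[h + 1, s_next] = max(F[h + 1, s_next], F[h, s] + reward[s, a])

# Gap of (h, s, a): optimal return minus the best return among trajectories
# that traverse (s, a) at step h; 0 exactly when some optimal trajectory does.
gap = np.full((H, nS, nA), np.inf)            # +inf for unreachable pairs
for h in range(H):
    for s in range(nS):
        if F[h, s] > -np.inf:
            for a in range(nA):
                best_through = F[h, s] + reward[s, a] + V[h + 1, transition[s, a]]
                gap[h, s, a] = V[0, s0] - best_through

print(gap.min())  # ~0: triples on an optimal trajectory have zero gap

Because transitions and rewards are deterministic, both dynamic-programming passes are exact, which is what makes such instance-dependent gaps computable in a single sweep; the paper's actual definitions and its exploration strategy (minimum flows, maximum coverage) build on related quantities but are more involved.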
Main file

TAMK22.pdf (609.48 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03825101, version 1 (21-10-2022)

Identifiers

HAL Id: hal-03825101

Cite

Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann. Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs. NeurIPS 2022 - 36th Conference on Neural Information Processing Systems, Nov 2022, New Orleans, United States. ⟨hal-03825101⟩