Preprint / Working Paper, Year: 2023

Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer

Abstract

In this paper, we study the problem of transferring available Markov Decision Process (MDP) models to learn and plan efficiently in an unknown but similar MDP. We refer to this as the Model Transfer Reinforcement Learning (MTRL) problem. First, we formulate MTRL for discrete MDPs and for Linear Quadratic Regulators (LQRs) with continuous states and actions. Then, we propose a generic two-stage algorithm, MLEMTRL, to address the MTRL problem in both discrete and continuous settings. In the first stage, MLEMTRL uses a constrained Maximum Likelihood Estimation (MLE)-based approach to estimate the target MDP model using a set of known MDP models. In the second stage, using the estimated target MDP model, MLEMTRL deploys a model-based planning algorithm appropriate for the MDP class. Theoretically, we prove worst-case regret bounds for MLEMTRL in both realisable and non-realisable settings. We empirically demonstrate that MLEMTRL enables faster learning in new MDPs than learning from scratch, and achieves near-optimal performance depending on the similarity between the available MDPs and the target MDP.
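The two-stage structure lends itself to a short illustration. Below is a minimal sketch for the discrete (tabular) case, assuming the constrained MLE stage is instantiated as fitting simplex weights over the known source models (a convex-combination parameterisation chosen here for concreteness). The function names, the scipy-based optimiser, and the value-iteration planner are illustrative assumptions, not the paper's exact algorithm.

import numpy as np
from scipy.optimize import minimize

def mle_model_transfer(source_models, transitions):
    """Stage 1 (sketch): estimate the target transition model via
    constrained MLE over a mixture of known source models.
    source_models: array of shape (K, S, A, S); transitions: (s, a, s') triples."""
    models = np.asarray(source_models)
    K, S, A, _ = models.shape
    counts = np.zeros((S, A, S))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1.0

    def neg_log_likelihood(w):
        # Weighted mixture of source models, shape (S, A, S).
        mix = np.tensordot(w, models, axes=1)
        return -np.sum(counts * np.log(mix + 1e-12))

    # Constrain the mixture weights to the probability simplex.
    res = minimize(
        neg_log_likelihood,
        x0=np.full(K, 1.0 / K),
        bounds=[(0.0, 1.0)] * K,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return np.tensordot(res.x, models, axes=1)

def plan(P, R, gamma=0.95, tol=1e-8):
    """Stage 2 (sketch): model-based planning (value iteration) on the
    estimated model. P: (S, A, S) transitions, R: (S, A) rewards."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)       # Bellman backup, shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)   # greedy policy w.r.t. the estimated model
        V = V_new

Under this parameterisation, a realisable target (one lying in the constrained model set) can be matched exactly, while a non-realisable target is approximated by the closest mixture in likelihood, mirroring the abstract's distinction between the two regret regimes.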

Dates and versions

hal-04260795, version 1 (26-10-2023)

Licence

Attribution - NonCommercial (CC BY-NC)

Identifiers

Cite

Hannes Eriksson, Debabrota Basu, Tommy Tram, Mina Alibeigi, Christos Dimitrakakis. Reinforcement Learning in the Wild with Maximum Likelihood-based Model Transfer. 2023. ⟨hal-04260795⟩