Preprint / Working paper, Year: 2024

MetaCURL: Non-stationary Concave Utility Reinforcement Learning

Abstract

We explore online learning in episodic loop-free Markov decision processes in non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL that handles convex performance criteria on the state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates traditional Bellman equations. Although classical CURL has recently been solved, no existing approach addresses non-stationary MDPs. This paper introduces MetaCURL, the first CURL algorithm for non-stationary MDPs. It employs a meta-algorithm that runs multiple black-box algorithm instances over different intervals and aggregates their outputs via a sleeping-expert framework. The key hurdle is partial information due to MDP uncertainty. Under partial information on the probability transitions (uncertainty and non-stationarity coming only from external noise, independent of agent state-action pairs), we achieve optimal dynamic regret without prior knowledge of MDP changes. Unlike approaches for classical RL, MetaCURL handles fully adversarial losses, not just stochastic ones. We believe our approach for managing non-stationarity with experts can be of interest to the RL community.
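The abstract only describes the meta-algorithm at a high level. As a purely illustrative aid (not the authors' implementation), the following minimal Python sketch shows one standard way a sleeping-expert aggregation over black-box learner instances could be organised: each instance is "awake" only on its own interval, awake instances are mixed with exponential weights, and sleeping instances keep their relative weight frozen. All names here (SleepingExpertsMeta, eta, add_expert, combine, update) are assumptions introduced for illustration only.

    import numpy as np

    class SleepingExpertsMeta:
        """Illustrative sleeping-experts aggregation (an assumption, not the
        paper's exact algorithm): black-box learner instances are experts
        that are 'awake' only on their own time interval; awake experts are
        mixed with exponential weights, sleeping experts keep their weight."""

        def __init__(self, eta: float):
            self.eta = eta       # meta learning rate (hypothetical tuning knob)
            self.log_w = {}      # log-weight per expert / black-box instance

        def add_expert(self, expert_id) -> None:
            # A new black-box instance is started (e.g. at the beginning of
            # a new interval) and enters the pool with a neutral weight.
            self.log_w.setdefault(expert_id, 0.0)

        def combine(self, predictions: dict):
            """Mix the predictions (e.g. policies or occupancy measures,
            represented here as numpy arrays) of the awake experts."""
            ids = list(predictions)
            logits = np.array([self.log_w[i] for i in ids])
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            mixture = sum(p * predictions[i] for p, i in zip(probs, ids))
            return mixture, dict(zip(ids, probs))

        def update(self, losses: dict, probs: dict) -> None:
            """Exponential-weights update restricted to awake experts;
            sleeping experts are implicitly charged the mixture loss, so
            their relative weight is frozen while they sleep."""
            mix_loss = sum(probs[i] * losses[i] for i in losses)
            for i in losses:
                self.log_w[i] -= self.eta * (losses[i] - mix_loss)

At each episode, one would query combine on the instances whose interval contains the current round, play the resulting mixture, and then call update with the losses observed by those awake instances.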

Dates and versions

hal-04591366, version 1 (29-05-2024)

Identifiers

  • HAL Id: hal-04591366, version 1

Cite

Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane. MetaCURL: Non-stationary Concave Utility Reinforcement Learning. 2024. ⟨hal-04591366⟩