Preprint / Working Paper, Year: 2023

Actor-Critic learning for mean-field control in continuous time

Abstract

We study policy gradient for mean-field control in continuous time in a reinforcement learning setting. By considering randomised policies with entropy regularisation, we derive a gradient expectation representation of the value function, which is amenable to actor-critic type algorithms, where the value functions and the policies are learnt alternately based on observation samples of the state and model-free estimation of the population state distribution, either by offline or online learning. In the linear-quadratic mean-field framework, we obtain an exact parametrisation of the actor and critic functions defined on the Wasserstein space. Finally, we illustrate the results of our algorithms with some numerical experiments on concrete examples.
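To make the alternating actor-critic structure concrete, the sketch below simulates a one-dimensional linear-quadratic mean-field example with an entropy-regularised Gaussian randomised policy: the population distribution is estimated model-free from the empirical mean of a simulated particle system, the critic is a quadratic surrogate updated by temporal differences, and the actor is updated by a policy-gradient step driven by the same TD error. This is only an illustrative sketch; the dynamics coefficients, the quadratic critic parametrisation, the step sizes and the update rules are assumptions for the example, not the authors' algorithm or parameters.

import numpy as np

rng = np.random.default_rng(0)

# Assumed 1-D LQ mean-field dynamics: dX = (A*X + Abar*m + B*a) dt + sig dW, with m = E[X].
A, Abar, B, sig = -0.5, 0.3, 1.0, 0.5
Q, Qbar, R = 1.0, 0.5, 1.0            # running cost: Q*x^2 + Qbar*(x-m)^2 + R*a^2 (assumed)
lam = 0.1                             # entropy-regularisation temperature (assumed)
dt, T, N = 0.02, 1.0, 2000            # time step, horizon, particle-population size
steps = int(T / dt)

# Actor: Gaussian randomised policy a ~ N(th[0]*x + th[1]*m, exp(th[2])).
th = np.array([0.0, 0.0, np.log(0.25)])

# Critic: assumed quadratic surrogate V(x, m) = w[0]*x^2 + w[1]*(x - m)^2 + w[2].
w = np.zeros(3)

def critic_feat(x, m):
    # Features of the assumed quadratic critic parametrisation.
    return np.stack([x ** 2, (x - m) ** 2, np.ones_like(x)], axis=-1)

lr_actor, lr_critic = 1e-2, 5e-2

for it in range(200):
    x = rng.normal(1.0, 0.5, size=N)              # sample an initial population
    for k in range(steps):
        m = x.mean()                              # model-free estimate of the population mean
        mean = th[0] * x + th[1] * m
        var = np.exp(th[2])
        a = mean + np.sqrt(var) * rng.normal(size=N)
        logp = -0.5 * np.log(2 * np.pi * var) - (a - mean) ** 2 / (2 * var)

        # Entropy-regularised running cost and one Euler step of the dynamics.
        cost = (Q * x ** 2 + Qbar * (x - m) ** 2 + R * a ** 2 + lam * logp) * dt
        x_new = x + (A * x + Abar * m + B * a) * dt + sig * np.sqrt(dt) * rng.normal(size=N)
        m_new = x_new.mean()

        # Critic: TD(0) step on the quadratic surrogate (terminal cost ignored for simplicity).
        phi, phi_new = critic_feat(x, m), critic_feat(x_new, m_new)
        td = cost + phi_new @ w - phi @ w
        w += lr_critic * (td[:, None] * phi).mean(axis=0)

        # Actor: policy-gradient step using the TD error as an advantage proxy.
        dmean = (a - mean) / var
        glog = np.stack([dmean * x, dmean * m,
                         -0.5 + (a - mean) ** 2 / (2 * var)], axis=-1)
        th -= lr_actor * (td[:, None] * glog).mean(axis=0)

        x = x_new

print("learnt feedback gains (x, mean):", th[:2], " policy log-variance:", th[2])

Running the sketch prints the learnt linear feedback gains on the state and on the population mean, together with the policy log-variance, which illustrates the alternating critic/actor updates described in the abstract; it is not a reproduction of the paper's numerical experiments.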
Main file: Algo-PGMFRL-Hal.pdf (1.79 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04025524, version 1 (12-03-2023)

Identifiers

Cite

Noufel Frikha, Maximilien Germain, Mathieu Laurière, Huyên Pham, Xuanye Song. Actor-Critic learning for mean-field control in continuous time. 2023. ⟨hal-04025524⟩
86 views
40 downloads
