Conference Paper, Year: 2022

Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation

Abstract

Actor-critic methods incorporating target networks have achieved remarkable empirical success in deep reinforcement learning. However, a theoretical understanding of the use of target networks in actor-critic methods is largely missing from the literature. In this paper, we narrow this gap between theory and practice by proposing the first theoretical analysis of an online target-based actor-critic algorithm with linear function approximation in the discounted reward setting. Our algorithm uses three different timescales: one for the actor and two for the critic. Instead of the standard single-timescale temporal difference (TD) learning algorithm as a critic, we use a two-timescale target-based variant of TD learning, closely inspired by practical actor-critic algorithms implementing target networks. First, we establish asymptotic convergence results for both the critic and the actor under Markovian sampling. Then, we provide a finite-time analysis showing the impact of incorporating a target network into actor-critic methods.
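
The abstract outlines the algorithm's structure: a fast online critic, a slower target critic, and a slowest actor, all updated along a single trajectory. The following minimal Python sketch illustrates how such a three-timescale scheme might look; the softmax policy, the fixed feature matrix Phi, the step sizes, and the Polyak-style target update are assumptions chosen for exposition, not the paper's exact algorithm (the precise updates are given in the paper itself).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and features -- assumptions, not the paper's setup.
n_states, n_actions, d = 10, 3, 8
Phi = rng.standard_normal((n_states, d))  # fixed features for the linear critic
gamma = 0.99                              # discount factor

# Three timescales: online critic fastest, target critic slower, actor slowest.
alpha, beta, eta = 1e-2, 1e-3, 1e-4

omega = np.zeros(d)                       # online critic weights
omega_bar = np.zeros(d)                   # target critic weights ("target network")
theta = np.zeros((n_states, n_actions))   # softmax policy parameters

def policy(s):
    """Softmax policy over actions in state s."""
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def update(s, a, r, s_next):
    """One online update from a single transition (Markovian sampling)."""
    global omega, omega_bar
    # Target-based TD error: the bootstrap term uses the slow target weights.
    delta = r + gamma * Phi[s_next] @ omega_bar - Phi[s] @ omega
    omega += alpha * delta * Phi[s]           # fast critic step
    omega_bar += beta * (omega - omega_bar)   # slow tracking of the online critic
    # Actor: policy-gradient step driven by the critic's TD error.
    grad_log = -policy(s)
    grad_log[a] += 1.0                        # softmax score function
    theta[s] += eta * delta * grad_log        # slowest timescale
```

The step-size ordering eta << beta << alpha mirrors the three timescales described in the abstract: the online critic adapts fastest, the target weights track it slowly, and the actor moves slowest of all.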
Main file: barakat22a.pdf (649.57 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03860881, version 1 (18-11-2022)

Identifiers

  • HAL Id: hal-03860881, version 1

Cite

Anas Barakat, Pascal Bianchi, Julien Lehmann. Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation. 25th International Conference on Artificial Intelligence and Statistics (AISTATS), Mar 2022, Virtual. ⟨hal-03860881⟩