Conference paper, Year: 2016

Off-Policy Neural Fitted Actor-Critic

Matthieu Zimmer
Yann Boniface
Alain Dutech

Abstract

A new off-policy, offline, model-free, actor-critic reinforcement learning algorithm dealing with environments that are continuous in both states and actions is presented. It addresses discrete-time problems where the goal is to maximize the discounted sum of rewards using stationary policies. Our algorithm allows a trade-off between data-efficiency and scalability. The amount of a priori knowledge is kept low by: (1) using neural networks to learn both the critic and the actor, (2) not relying on initial trajectories provided by an expert, and (3) not depending on known goal states. Experimental results compare data-efficiency with 4 state-of-the-art algorithms on three benchmark environments. This article largely reproduces previous work [34], extending it with a higher-dimensional environment, improved control architectures, and batch normalization for the other state-of-the-art algorithms.
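The abstract describes the setting only at a high level. As a rough illustration of the fitted actor-critic idea it names (off-policy learning from stored transitions, a neural critic and actor, maximizing the discounted return sum_t gamma^t * r_t), here is a minimal sketch in PyTorch. The deterministic tanh actor, network sizes, Adam optimizers, and one-step bootstrap target are assumptions made for illustration; this is not the paper's exact Neural Fitted Actor-Critic update.

```python
import torch
import torch.nn as nn

# Minimal sketch of an off-policy actor-critic fitting step (illustrative
# only; architecture and hyperparameters are assumptions, not the paper's).
gamma = 0.99                    # discount factor in sum_t gamma^t * r_t
state_dim, action_dim = 3, 1    # e.g. a small continuous-control task

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def fitted_step(s, a, r, s2, done):
    """One fitted iteration over a batch of stored transitions.

    s, a, r, s2, done are float tensors of shape (batch, dim); the
    transitions may come from any behavior policy (hence off-policy).
    """
    # 1) Regress the critic onto a frozen one-step bootstrap target.
    with torch.no_grad():
        q_next = critic(torch.cat([s2, actor(s2)], dim=1))
        target = r + gamma * (1.0 - done) * q_next
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = ((q - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # 2) Improve the actor by ascending the critic's value of its actions.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```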
Main file
main.pdf (1.1 MB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-01413886, version 1 (11-12-2016)

Identifiers

  • HAL Id: hal-01413886, version 1

Cite

Matthieu Zimmer, Yann Boniface, Alain Dutech. Off-Policy Neural Fitted Actor-Critic. NIPS 2016 - Deep Reinforcement Learning Workshop, Dec 2016, Barcelona, Spain. ⟨hal-01413886⟩
220 views
691 downloads
