Conference Papers, Year: 2018

Continual State Representation Learning for Reinforcement Learning using Generative Replay

Abstract

We consider the problem of building a state representation model in a continual fashion. As the environment changes, the aim is to efficiently compress the information contained in the sensory states without losing past knowledge. The learned features are then fed to a Reinforcement Learning algorithm to learn a policy. We propose to use Variational Auto-Encoders (VAEs) for state representation, and Generative Replay, i.e., the use of generated samples, to maintain past knowledge. We also provide a general and statistically sound method for automatic environment change detection. Our method yields an efficient state representation, enables forward transfer, and avoids catastrophic forgetting. The resulting model incrementally learns new information without using past data and with a bounded system size.
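The core mechanism described in the abstract, a VAE that compresses observations and a generative-replay step in which samples drawn from a frozen copy of the previous VAE are mixed into training after an environment change, can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' implementation: the architecture sizes, hyper-parameters, and the names `VAE`, `vae_loss`, and `train_with_generative_replay` are all hypothetical, and the statistical change-detection test mentioned in the abstract is not shown.

```python
# Hypothetical sketch of VAE-based state representation with generative replay.
# Sizes, losses, and training schedule are illustrative assumptions only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(obs_dim, 32)
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, obs_dim))

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return torch.sigmoid(self.dec(z))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decode(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


def train_with_generative_replay(vae, new_obs, prev_vae=None, steps=100):
    """Fit `vae` on current-environment observations; if a frozen previous VAE
    is given, mix its generated samples into each batch (generative replay)."""
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for _ in range(steps):
        batch = new_obs[torch.randint(len(new_obs), (32,))]
        if prev_vae is not None:
            with torch.no_grad():
                z = torch.randn(32, prev_vae.mu.out_features)
                replay = prev_vae.decode(z)  # pseudo-samples standing in for past data
            batch = torch.cat([batch, replay], dim=0)
        recon, mu, logvar = vae(batch)
        loss = vae_loss(recon, batch, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae


# Usage sketch: random tensors stand in for real observations from two environments.
old_vae = train_with_generative_replay(VAE(), torch.rand(1000, 64))
new_vae = copy.deepcopy(old_vae)  # freeze the old VAE, keep training the copy
new_vae = train_with_generative_replay(new_vae, torch.rand(1000, 64),
                                        prev_vae=old_vae)
```

In this reading, once a change of environment is detected the current VAE is frozen and only a copy of it continues training on new observations plus replayed samples, which is consistent with the abstract's claims of learning without stored past data and with a bounded system size.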
Main file: _NIPS_CL_Workshop__Continual_State_Representation_Learning_for_Reinforcement_Learning (2).pdf (1.6 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01951399, version 1 (11-12-2018)

Identifiers

  • HAL Id: hal-01951399, version 1

Cite

Hugo Caselles-Dupré, Michael Garcia-Ortiz, David Filliat. Continual State Representation Learning for Reinforcement Learning using Generative Replay. Workshop on Continual Learning, NeurIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. ⟨hal-01951399⟩
