Journal article in IEEE Transactions on Cognitive and Developmental Systems, 2017

Bootstrapping Q-Learning for Robotics from Neuro-Evolution Results

Abstract

Reinforcement learning problems are hard to solve in a robotics context because classical algorithms rely on discrete representations of actions and states, whereas in robotics both are continuous. Discrete sets of actions and states can be defined by hand, but doing so requires expertise that may not be available, in particular in open environments. A process is proposed that lets a robot build its own representation for a reinforcement learning algorithm. The principle is to first use a direct policy search in the sensorimotor space, i.e., with no predefined discrete sets of states or actions, and then to extract discrete actions from the corresponding learning traces and identify the state dimensions that are relevant for estimating the value function. Once this is done, the robot can apply reinforcement learning (1) to be more robust to new domains and, if required, (2) to learn faster than a direct policy search. This approach takes the best of both worlds: it first learns in a continuous space, avoiding the need for a predefined representation at the price of a long learning process and poor generalization, and then learns with an adapted representation to be faster and more robust.
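The following sketch is not taken from the paper; it only illustrates, on a toy 2-D reaching task, the kind of pipeline the abstract describes: gather traces from an exploratory policy acting in the continuous sensorimotor space, cluster the continuous actions from those traces into a discrete action set, discretize the state, and then run tabular Q-learning on the extracted representation. The environment, the use of k-means for action extraction, and the grid discretization of the state are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' code) of bootstrapping Q-learning from
# continuous learning traces: (1) collect traces with an exploratory policy,
# (2) extract a discrete action set by clustering the continuous actions,
# (3) run tabular Q-learning on the resulting discrete representation.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def rollout(policy, episode_len=50):
    """Run one episode of a toy 2-D reaching task and return its trace."""
    state = rng.uniform(-1.0, 1.0, size=2)
    trace = []
    for _ in range(episode_len):
        action = policy(state)                       # continuous action in [-1, 1]^2
        next_state = np.clip(state + 0.1 * action, -1.0, 1.0)
        reward = -np.linalg.norm(next_state)         # reward for approaching the origin
        trace.append((state, action, reward, next_state))
        state = next_state
    return trace

# --- Phase 1: stand-in for the direct policy search (random exploration) ---
traces = [rollout(lambda s: rng.uniform(-1, 1, size=2)) for _ in range(20)]

# --- Phase 2: extract a discrete action set from the learning traces ---
actions = np.array([a for tr in traces for (_, a, _, _) in tr])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(actions)
discrete_actions = kmeans.cluster_centers_           # extracted action prototypes

def discretize_state(state, bins=6):
    """Crude grid discretization of the (assumed relevant) state dimensions."""
    return tuple(np.digitize(state, np.linspace(-1, 1, bins - 1)))

# --- Phase 3: tabular Q-learning on the extracted representation ---
Q = defaultdict(lambda: np.zeros(len(discrete_actions)))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(200):
    state = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(50):
        s = discretize_state(state)
        a = (rng.integers(len(discrete_actions)) if rng.random() < eps
             else int(np.argmax(Q[s])))
        next_state = np.clip(state + 0.1 * discrete_actions[a], -1.0, 1.0)
        reward = -np.linalg.norm(next_state)
        s2 = discretize_state(next_state)
        Q[s][a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s][a])
        state = next_state
```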
Main file
article.pdf (2.38 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01494744 , version 1 (23-03-2017)

Identifiers

hal-01494744 · DOI: 10.1109/TCDS.2016.2628817

Cite

Matthieu Zimmer, Stephane Doncieux. Bootstrapping Q-Learning for Robotics from Neuro-Evolution Results. IEEE Transactions on Cognitive and Developmental Systems, 2017, ⟨10.1109/TCDS.2016.2628817⟩. ⟨hal-01494744⟩