Journal article in Lecture Notes in Computer Science, 2006

Reward function and initial values: Better choices for accelerated goal-directed reinforcement learning

Abstract

An important issue in Reinforcement Learning (RL) is how to accelerate or improve the learning process. In this paper, we study the influence of some RL parameters on the learning speed. Although the convergence properties of RL have been widely studied, no precise rules exist for correctly choosing the reward function and the initial Q-values. Our method guides the choice of these RL parameters in the context of reaching a goal in minimal time. We develop a theoretical study and provide experimental justification for choosing, on the one hand, the reward function and, on the other hand, particular initial Q-values based on a goal bias function.
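
To illustrate the kind of goal-biased initialization the abstract refers to, the sketch below shows tabular Q-learning on a small deterministic grid world in which the initial Q-values are set from a hypothetical goal bias function (here, a scaled negative Manhattan distance to the goal) rather than from zeros, together with a step-penalty reward of -1 and a terminal reward of 0 at the goal. The environment, the bias function, and all parameter values are illustrative assumptions, not the paper's exact formulation.

import random

WIDTH, HEIGHT = 5, 5
GOAL = (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

def goal_bias(state, scale=0.1):
    # Hypothetical goal-bias heuristic (an assumption, not the authors'
    # exact function): higher initial value for states closer to the goal.
    return -scale * (abs(GOAL[0] - state[0]) + abs(GOAL[1] - state[1]))

def step(state, action):
    # Deterministic grid-world transition with a -1 step penalty and a
    # terminal reward of 0 at the goal (one common goal-directed scheme).
    x = min(max(state[0] + action[0], 0), WIDTH - 1)
    y = min(max(state[1] + action[1], 0), HEIGHT - 1)
    nxt = (x, y)
    return nxt, (0.0 if nxt == GOAL else -1.0), nxt == GOAL

# Initial Q-values taken from the goal-bias function instead of zeros.
states = [(x, y) for x in range(WIDTH) for y in range(HEIGHT)]
Q = {(s, a): goal_bias(s) for s in states for a in range(len(ACTIONS))}

alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(200):
    state, done, steps = (0, 0), False, 0
    while not done and steps < 500:
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        nxt, reward, done = step(state, ACTIONS[a])
        # Standard Q-learning update toward the bootstrapped target.
        target = reward + (0.0 if done else
                           gamma * max(Q[(nxt, i)] for i in range(len(ACTIONS))))
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state, steps = nxt, steps + 1

Because the bias assigns higher initial values to states nearer the goal, greedy action selection is drawn toward the goal from the very first episodes, which is the kind of acceleration the paper analyzes.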
Main file
matignon2006ann.pdf (541.67 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00331752, version 1 (17-10-2008)
hal-00331752, version 2 (15-12-2009)

Identifiers

  • HAL Id: hal-00331752, version 2

Cite

Laëtitia Matignon, Guillaume J. Laurent, Nadine Le Fort-Piat. Reward function and initial values: Better choices for accelerated goal-directed reinforcement learning. Lecture Notes in Computer Science, 2006, 1 (4131), pp. 840-849. ⟨hal-00331752v2⟩
