Conference paper, Year: 2006

Combining self-organizing maps with mixtures of experts: Application to an Actor-critic model of reinforcement learning in the Basal Ganglia

Louis-Emmanuel Martinet
  • Role: Author
  • PersonId: 888438
Agnès Guillot
  • Role: Author
  • PersonId: 854761

Abstract

In a reward-seeking task performed in a continuous environment, our previous work compared several Actor-Critic architectures implementing dopamine-like reinforcement learning mechanisms in the rat's basal ganglia. The complexity of the task requires the coordination of several submodules, each module being an expert trained on a particular subset of the task. Our results illustrated the consequences of different hypotheses about the management of Actor-Critic submodules. We showed that the classical method, where the choice of the expert to train at a given time depends on each expert's performance, suffers from strong limitations. We instead proposed to cluster the continuous state space with an ad hoc method that lacked autonomy and generalization abilities. In the present work we combine the mixture of experts with self-organizing maps in order to cluster the state space autonomously. On the one hand, we find that classical Kohonen maps give very variable results: some task decompositions provide very good and stable reinforcement learning performance, whereas others are ill-suited to the task. Moreover, they require the number of experts to be set a priori. On the other hand, algorithms such as Growing Neural Gas or Growing When Required choose the number of experts to train autonomously and incrementally. They lead to good performance, even if it remains weaker than that of our hand-tuned task decomposition and of the best Kohonen maps we obtained. We finally discuss what information could be added to these algorithms to make the task decomposition better suited to the reinforcement learning process. For example, information about the current behavior of the robot could help adapt the boundaries between two experts in regions of the state space where the model needs to learn to switch between Actor-Critic modules.
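The architecture described above combines a self-organizing map, which clusters the continuous state space, with a mixture of Actor-Critic experts, one per cluster. As an illustration only (a minimal sketch, not the authors' implementation; the map size, 1-D map topology, learning rates and reward values below are all assumptions), the following Python code shows how the best-matching unit of a Kohonen map can gate which Actor-Critic expert acts on, and learns from, the current state.

```python
# Sketch: a Kohonen self-organizing map clusters a continuous 2-D state space,
# and the best-matching unit gates which Actor-Critic expert handles the state.
# All names, sizes and learning rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- Self-organizing map: n_units prototype vectors in state space ---
n_units, state_dim = 9, 2
prototypes = rng.uniform(0.0, 1.0, size=(n_units, state_dim))

def train_som(states, epochs=20, lr0=0.5, sigma0=2.0):
    """Classical Kohonen updates on a 1-D chain of units (for brevity)."""
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for s in states:
            bmu = np.argmin(np.linalg.norm(prototypes - s, axis=1))
            d = np.abs(np.arange(n_units) - bmu)          # distance on the 1-D map
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))    # neighborhood function
            prototypes[:] += lr * h[:, None] * (s - prototypes)

def gate(state):
    """The best-matching unit selects which expert is responsible."""
    return int(np.argmin(np.linalg.norm(prototypes - state, axis=1)))

# --- One Actor-Critic expert per SOM unit ---
class ActorCriticExpert:
    def __init__(self, n_actions=4, alpha=0.1, gamma=0.95):
        self.value = 0.0                      # critic estimate for this region
        self.pref = np.zeros(n_actions)       # actor preferences
        self.alpha, self.gamma = alpha, gamma

    def act(self):
        p = np.exp(self.pref - self.pref.max())           # softmax action choice
        return int(rng.choice(len(self.pref), p=p / p.sum()))

    def learn(self, action, reward, next_value):
        td_error = reward + self.gamma * next_value - self.value
        self.value += self.alpha * td_error               # critic update
        self.pref[action] += self.alpha * td_error        # actor update

# Usage: cluster sampled states, then route each visited state to one expert.
states = rng.uniform(0.0, 1.0, size=(500, 2))
train_som(states)
experts = [ActorCriticExpert() for _ in range(n_units)]
s, s_next, reward = states[0], states[1], 1.0
expert = experts[gate(s)]
a = expert.act()
expert.learn(a, reward, experts[gate(s_next)].value)
```

In this sketch the number of experts is fixed a priori, as with a classical Kohonen map; Growing Neural Gas or Growing When Required would instead add units incrementally, so the number of experts need not be chosen in advance.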
Main file
KMG_SAB06.pdf (209.87 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00688933, version 1 (18-04-2012)

Identifiers

  • HAL Id: hal-00688933
  • DOI: 10.1007/11840541_33

Cite

Mehdi Khamassi, Louis-Emmanuel Martinet, Agnès Guillot. Combining self-organizing maps with mixtures of experts: Application to an Actor-critic model of reinforcement learning in the Basal Ganglia. SAB 2006 - 9th International Conference on the Simulation of Adaptive Behavior, Sep 2006, Rome, Italy. pp.394-405, ⟨10.1007/11840541_33⟩. ⟨hal-00688933⟩
219 Views
376 Downloads
