Actor–Critic models of reinforcement learning in the basal ganglia: From natural to artificial rats
Abstract
Since 1995, numerous Actor–Critic architectures for reinforcement learning have been proposed as models of dopamine-like reinforcement learning mechanisms in the rat's basal ganglia. However, these models have usually been tested on different tasks, making it difficult to compare their efficiency for an autonomous animat. Here we compare four such architectures within a single animat performing the same reward-seeking task. This comparison illustrates the consequences of different hypotheses concerning the management of multiple Actor sub-modules and Critic units, and their more or less autonomously determined coordination. We show that the classical method of coordinating modules by a mixture of experts, where gating depends on each module's performance, did not allow our task to be solved. We then address the question of which principle could be applied to combine these units efficiently. Finally, improvements to Critic modeling and the accuracy of Actor–Critic models on a natural task are discussed in the perspective of our Psikharpax project, an artificial rat that must survive autonomously in unpredictable environments.
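To make the performance-based gating principle concrete, the sketch below shows one common way a mixture of experts can coordinate several Critic units: each expert learns a linear value function by temporal-difference (TD) learning, and a softmax over each expert's recent prediction error assigns the mixing (responsibility) weights. This is a minimal illustrative sketch in the spirit of such schemes; the class and function names (`CriticExpert`, `responsibilities`) and all parameter values are our own assumptions, not the exact equations of any of the four compared architectures.

```python
import numpy as np

class CriticExpert:
    """One Critic unit: a linear value function trained by TD learning."""

    def __init__(self, n_features, alpha=0.1, gamma=0.95):
        self.w = np.zeros(n_features)    # value-function weights
        self.alpha = alpha               # learning rate
        self.gamma = gamma               # discount factor

    def value(self, x):
        return float(self.w @ x)

    def td_error(self, x, r, x_next):
        # delta = r + gamma * V(s') - V(s): the dopamine-like teaching signal
        return r + self.gamma * self.value(x_next) - self.value(x)

    def update(self, x, delta, responsibility):
        # Learning is modulated by the expert's responsibility weight,
        # so only experts that predict well in this region specialize here.
        self.w += self.alpha * responsibility * delta * x


def responsibilities(td_errors, sigma=1.0):
    """Softmax over negated squared prediction errors: experts whose
    reward predictions were most accurate receive the largest weights."""
    scores = -np.square(td_errors) / (2 * sigma ** 2)
    e = np.exp(scores - scores.max())   # subtract max for numerical stability
    return e / e.sum()


# Example: two experts, one-hot state features, a single TD step.
experts = [CriticExpert(n_features=4) for _ in range(2)]
x, x_next = np.eye(4)[0], np.eye(4)[1]
deltas = np.array([e.td_error(x, r=1.0, x_next=x_next) for e in experts])
g = responsibilities(deltas)
for expert, g_i, delta in zip(experts, g, deltas):
    expert.update(x, delta, g_i)
```

Under this kind of gating, an expert's influence depends only on its own prediction performance; the paper's finding is that this performance-based coordination alone was insufficient for the reward-seeking task considered.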