Gradient Ascent Activity-based Credit Assignment with History-dependent Reward
Abstract
In reinforcement learning, credit assignment with history-dependent reward is a key problem for modeling agents that: (i) associate the returns from their environment with their past (series of) actions, and (ii) figure out which past decisions are responsible for achieving their current goal. Common approaches simplify this problem by assuming an immediate reward for each action. Our first result is a general and formal framework in which the credits assigned to actions are updated following a gradient of expected rewards from past actions. This framework can model complex tasks that require fulfilling sub-tasks in order, each sub-task consisting of a specific sequence of actions. Our second result is an algorithm that uses the activity of actions to increase (resp. decrease) the credits of necessary (resp. unnecessary) past actions. We illustrate our algorithm on a task inspired by a behavioral learning task of rodents in a maze.
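To make the two ideas above concrete, the following is a minimal Python sketch, not the paper's algorithm: reward is history-dependent (it is granted only when a hidden sub-sequence of actions is completed in order), and per-action credits are updated by gradient ascent on expected reward, weighted by each action's activity in the episode. All names (`credits`, `activity`, `alpha`), the i.i.d. action-sampling policy, and the binary reward are our own simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
target = [2, 0, 3]             # hidden sub-task: these actions must occur in this order
credits = np.zeros(n_actions)  # per-action credits (log-preferences)
alpha = 0.1                    # learning rate (our choice)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for episode in range(2000):
    probs = softmax(credits)
    actions = rng.choice(n_actions, size=6, p=probs)  # episode of 6 actions

    # History-dependent reward: 1 only if `target` appears as an
    # ordered sub-sequence of the episode's actions, 0 otherwise.
    it = iter(actions)
    reward = float(all(any(a == t for a in it) for t in target))

    # Activity of each action: how often it was taken this episode.
    activity = np.bincount(actions, minlength=n_actions)

    # REINFORCE-style gradient ascent on expected reward:
    # for i.i.d. sampling, grad of log-likelihood w.r.t. credits
    # is (activity - T * probs), so necessary past actions gain
    # credit on success and unnecessary ones lose it on average.
    credits += alpha * reward * (activity - len(actions) * probs)

print("learned credits:", np.round(credits, 2))
print("action probabilities:", np.round(softmax(credits), 2))
```

Under these assumptions the credits of the three actions in the hidden sub-sequence grow while the remaining action's credit shrinks, mirroring the increase/decrease behavior described above, although this toy policy cannot itself represent the ordering of sub-tasks.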
Domains
Computer Science [cs]

Origin
Files produced by the author(s)