Combining artificial curiosity and tutor guidance for environment exploration
Abstract
In a new environment, an artificial agent should both explore autonomously and exploit tutoring signals from human caregivers. While these two mechanisms have mainly been studied in isolation, we show in this paper that a carefully designed combination of the two performs better than either on its own. To this end, we propose an autonomous agent whose actions result from a user-defined weighted combination of two drives: a tendency toward gaze-following behaviors in the presence of a tutor, and a novelty-based intrinsic curiosity. Both drives are incorporated into a model-based reinforcement learning framework through reward shaping. The agent is evaluated on a discretized pick-and-place task in order to study the effects of various combinations of the two drives. Results show that a properly tuned combination leads to faster and more consistent discovery of the task than either drive used in isolation. Additionally, experiments in a reward-free version of the environment indicate that combining curiosity and gaze-following behaviors is a promising path toward real-life exploration in artificial agents.
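To make the "user-defined weighted combination of two drives" concrete, here is a minimal sketch of reward shaping with a curiosity term and a gaze-following term. The paper's exact formulation is not reproduced here: the count-based novelty bonus, the binary gaze-matching bonus, and the names `novelty_bonus`, `gaze_bonus`, `w_curiosity`, and `w_gaze` are illustrative assumptions, not the authors' implementation.

```python
import math

def novelty_bonus(state, visit_counts):
    """Assumed count-based curiosity signal: rarely visited states score higher."""
    return 1.0 / math.sqrt(visit_counts.get(state, 0) + 1)

def gaze_bonus(state, tutor_gaze_target):
    """Assumed social signal: reward reaching the object the tutor is gazing at."""
    if tutor_gaze_target is None:
        return 0.0
    return 1.0 if state == tutor_gaze_target else 0.0

def shaped_reward(extrinsic, state, visit_counts, tutor_gaze_target,
                  w_curiosity=0.5, w_gaze=0.5):
    """Task reward plus a user-defined weighted combination of the two drives."""
    return (extrinsic
            + w_curiosity * novelty_bonus(state, visit_counts)
            + w_gaze * gaze_bonus(state, tutor_gaze_target))

# Example: no task reward yet, a novel state, and the tutor gazing at it.
counts = {"cube_on_table": 3}
r = shaped_reward(0.0, "cube_on_shelf", counts, "cube_on_shelf")
print(r)  # curiosity bonus (1.0) * 0.5 + gaze bonus (1.0) * 0.5 = 1.0
```

Setting `w_gaze = 0` recovers a purely curiosity-driven agent and `w_curiosity = 0` a purely tutor-guided one, which is how the abstract's comparison between isolated and combined drives can be read.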