Conference Paper, Year: 2020

Tirl: enriching actor-critic RL with non-expert human teachers and a trust model

Abstract

Reinforcement learning (RL) algorithms have been demonstrated to be very attractive tools to train agents to achieve sequential tasks. However, these algorithms require too much training data to converge to be applied efficiently to physical robots. A human teacher can make the learning process faster and more robust, but the overall performance heavily depends on the quality and availability of the teacher's demonstrations or instructions. In particular, when these teaching signals are inadequate, the agent may fail to learn an optimal policy. In this paper, we introduce a trust-based interactive task learning approach. We propose an RL architecture able to learn both from environment rewards and from various sparse teaching signals provided by non-expert teachers, using an actor-critic agent, a human model, and a trust model. We evaluate the performance of this architecture in four different setups, using a maze environment with different simulated teachers, and show the benefits of the trust model.
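The abstract does not specify the paper's actual TIRL update rules. As a rough illustration only, the Python sketch below shows one way an actor-critic learner might blend environment rewards with sparse teacher feedback through a scalar trust estimate; the class name, the trust-update heuristic, and all hyperparameters are hypothetical and not taken from the paper.

```python
import numpy as np

class TrustWeightedActorCritic:
    """Tabular actor-critic blending environment rewards with sparse
    teacher feedback, weighted by a scalar trust estimate.
    Illustrative sketch only; not the paper's TIRL algorithm."""

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.1, gamma=0.99):
        self.V = np.zeros(n_states)                   # critic: state values
        self.theta = np.zeros((n_states, n_actions))  # actor: softmax preferences
        self.trust = 0.5                              # hypothetical scalar trust in the teacher
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def policy(self, s):
        # Softmax over action preferences for state s (numerically stabilized).
        prefs = self.theta[s] - self.theta[s].max()
        p = np.exp(prefs)
        return p / p.sum()

    def step(self, s, a, r_env, s_next, done, teacher_signal=None):
        # Standard TD error from the environment reward.
        target = r_env + (0.0 if done else self.gamma * self.V[s_next])
        td = target - self.V[s]
        self.V[s] += self.alpha * td

        # Hypothetical: fold sparse teacher feedback (e.g. +1/-1) into the
        # advantage, scaled by the current trust in the teacher.
        advantage = td
        if teacher_signal is not None:
            advantage += self.trust * teacher_signal
            # Hypothetical trust update: agreement between the teacher's
            # signal and the critic's TD error raises trust, disagreement lowers it.
            agreement = float(np.sign(teacher_signal) == np.sign(td))
            self.trust = float(np.clip(self.trust + 0.05 * (agreement - 0.5), 0.0, 1.0))

        # Actor update: policy-gradient step on the softmax preferences,
        # where grad log pi(a|s) = one_hot(a) - pi(.|s).
        p = self.policy(s)
        grad = -p
        grad[a] += 1.0
        self.theta[s] += self.beta * advantage * grad

# Hypothetical usage in a small gridworld with occasional teacher feedback:
agent = TrustWeightedActorCritic(n_states=25, n_actions=4)
rng = np.random.default_rng(0)
s = 0
a = rng.choice(4, p=agent.policy(s))
agent.step(s, a, r_env=0.0, s_next=1, done=False, teacher_signal=+1.0)
```

In this sketch, inadequate teaching signals that repeatedly contradict the critic's own error signal drive the trust weight toward zero, so the agent falls back on environment rewards alone, which is the qualitative behavior the abstract attributes to the trust model.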
Main file
ROMAN2020_Final-2.pdf (2.83 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03124262, version 1 (25-02-2021)

Identifiers

  • HAL Id: hal-03124262, version 1

Cite

Felix Rutard, Olivier Sigaud, Mohamed Chetouani. Tirl: enriching actor-critic RL with non-expert human teachers and a trust model. The 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, 2020, Napoli, Italy. ⟨hal-03124262⟩
