RL-IAC: An Exploration Policy for Online Saliency Learning on an Autonomous Mobile Robot
Conference paper, 2016

Abstract

In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) able to guide the robot through areas of the space where learning progress is high, while minimizing the time spent moving in its environment without learning. We demonstrate first that our saliency approach is an efficient tool for generating relevant object box proposals in the input image and significantly outperforms state-of-the-art algorithms. Second, we show that RL-IAC can drastically decrease the time required for learning saliency compared to random exploration.
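The abstract describes the core idea behind RL-IAC: estimate the learning progress of the saliency model in each region of the environment and use it, together with displacement time, to decide where the robot should go next. The sketch below is a rough, hypothetical illustration of that idea only; the paper itself frames region selection as a reinforcement learning problem, whereas this sketch uses a simplified greedy rule, and all class, method, and variable names are assumptions.

```python
import random


class LearningProgressExplorer:
    """Simplified, hypothetical sketch of learning-progress-driven exploration.

    Each region keeps a short history of saliency-prediction errors; learning
    progress is estimated as the recent decrease of that error. The next region
    is picked by trading progress against travel cost (epsilon-greedy).
    """

    def __init__(self, n_regions, travel_cost, epsilon=0.1, window=5):
        self.errors = [[] for _ in range(n_regions)]  # error history per region
        self.travel_cost = travel_cost                # travel_cost[i][j]: time from region i to j
        self.epsilon = epsilon                        # probability of a random move
        self.window = window                          # window size for progress estimation

    def record_error(self, region, error):
        """Store the latest prediction error observed while learning in a region."""
        self.errors[region].append(error)

    def learning_progress(self, region):
        """Estimate progress as the drop in mean error between two successive windows."""
        hist = self.errors[region]
        if len(hist) < 2 * self.window:
            return 0.0
        older = sum(hist[-2 * self.window:-self.window]) / self.window
        recent = sum(hist[-self.window:]) / self.window
        return max(0.0, older - recent)

    def next_region(self, current):
        """Choose the next region: mostly greedy on progress minus travel cost."""
        n = len(self.errors)
        if random.random() < self.epsilon:
            return random.randrange(n)
        scores = [self.learning_progress(r) - self.travel_cost[current][r] for r in range(n)]
        return max(range(n), key=scores.__getitem__)
```

In this simplified view, regions where the error still decreases (progress is high) attract the robot, while regions that are already mastered or too costly to reach are visited less; the actual RL-IAC mechanism additionally propagates these rewards over the navigation graph.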
Main file: IROS2016_accepted.pdf (1.2 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01392947, version 1 (05-11-2016)
hal-01392947, version 2 (16-11-2016)

Identifiers

  • HAL Id: hal-01392947, version 1

Cite

Céline Craye, David Filliat, Jean-François Goudou. RL-IAC: An Exploration Policy for Online Saliency Learning on an Autonomous Mobile Robot. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2016, Daejeon, South Korea. ⟨hal-01392947v1⟩
234 views
356 downloads
