A Top-Down and Bottom-Up Visual Attention Model for Humanoid Object Approaching and Obstacle Avoidance
Abstract
Most of the research on humanoid walking tasks has considered a global representation of the scene that frequently relies on external sensors. This is detrimental to the agent's autonomy and reactivity under unknown or changing scenarios. Ego-centric localization has been less explored, and works considering on-board acquisitions have mostly dealt with tasks under controlled scenarios where the path to the object is cleared of obstacles. In this work, a behavior-based control scheme is proposed that allows the Nao robot to approach and position itself relative to a given face of an object while avoiding obstacles. To this end, the solution relies on top-down (color-based) and bottom-up (optic-flow-based) visual features, together with proprioceptive information registered on board. The model is decentralized and exploits the emergence of behavior from the independent contributions of a walk task and a look-at task. An embodied visual encoding approach is proposed to support arbitration between competing behavioral modes.
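As a rough, self-contained illustration of how such cues might be fused (and not a reproduction of the paper's actual model), the Python sketch below combines a color-based top-down map with an optic-flow-based bottom-up map into a simple winner-take-all choice between behavioral modes. All function names, thresholds, and the arbitration rule are hypothetical assumptions for illustration only.

```python
import numpy as np
import cv2

def top_down_saliency(frame_bgr, lower_hsv, upper_hsv):
    """Top-down cue: mask of pixels matching the target object's color."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    return mask.astype(np.float32) / 255.0

def bottom_up_saliency(prev_gray, gray):
    """Bottom-up cue: dense optic-flow magnitude, a proxy for nearby motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return mag / (mag.max() + 1e-6)

def arbitrate(td, bu, flow_thresh=0.2, color_thresh=0.01):
    """Hypothetical winner-take-all between behavioral modes."""
    if bu.mean() > flow_thresh:      # strong flow: obstacle avoidance dominates
        return "avoid"
    if td.mean() > color_thresh:     # target color visible: approach it
        return "approach"
    return "search"                  # no cue wins: keep looking for the target

if __name__ == "__main__":
    # Synthetic two-frame example: a red target appears in the second frame.
    prev = np.zeros((120, 160, 3), np.uint8)
    cur = prev.copy()
    cv2.circle(cur, (80, 60), 20, (0, 0, 255), -1)
    td = top_down_saliency(cur,
                           np.array([0, 120, 70]), np.array([10, 255, 255]))
    bu = bottom_up_saliency(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                            cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY))
    print(arbitrate(td, bu))
```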