Grounding Humanoid Visually Guided Walking: From Action-independent to Action-oriented Knowledge
Abstract
In the context of humanoid and service robotics, it is essential that the agent be able to position itself with respect to objects of interest in the environment. Research on visually guided walking has relied mostly on the cognitivist conception of artificial intelligence and has consequently tended to overlook the characteristics of the context in which behavior occurs. As a result, considerable effort has been directed toward defining explicit, action-independent models of the solution, often at high computational cost. In this study, inspired by research on embodied cognition, we focus on the analysis of sensory-motor coupling, in particular on the relation between embodiment, information, and action-oriented representation. A behavior scheme that mimics human walking is proposed, endowing the agent with the skill of approaching stimuli. By exploiting the redundancies and statistical regularities induced by sensory-motor coordination, a significant contribution to object discrimination was obtained: salience is anticipated by fusing visual and proprioceptive information in a Bayesian network. The solution was implemented on the humanoid platform Nao, which accomplished the task in an unstructured scenario.
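To give a concrete sense of the kind of fusion the abstract refers to, the following is a minimal sketch, not the study's implementation: it assumes a naive-Bayes combination of one discretized visual cue and one proprioceptive cue into a posterior over a binary salience variable. All variable names, discretization levels, and probability values are illustrative assumptions.

```python
# Minimal sketch of Bayesian fusion of a visual cue and a proprioceptive cue
# into a salience posterior. The prior and likelihood tables below are
# hypothetical placeholders, not values from the study.

# Prior over the binary salience variable S.
prior = {"salient": 0.3, "not_salient": 0.7}

# Hypothetical likelihood tables P(visual_cue | S) and P(proprio_cue | S),
# with each cue discretized into "low" / "high".
p_visual = {
    "salient":     {"low": 0.2, "high": 0.8},
    "not_salient": {"low": 0.7, "high": 0.3},
}
p_proprio = {
    "salient":     {"low": 0.4, "high": 0.6},
    "not_salient": {"low": 0.8, "high": 0.2},
}

def fuse(visual_cue: str, proprio_cue: str) -> dict:
    """Return P(S | visual_cue, proprio_cue), assuming the two cues are
    conditionally independent given S (naive-Bayes structure)."""
    unnormalized = {
        s: prior[s] * p_visual[s][visual_cue] * p_proprio[s][proprio_cue]
        for s in prior
    }
    z = sum(unnormalized.values())
    return {s: v / z for s, v in unnormalized.items()}

if __name__ == "__main__":
    # Example: a strong visual response combined with a consistent
    # proprioceptive signal raises the salience posterior.
    print(fuse("high", "high"))
```

In the actual system, such a posterior would be computed over regions of the visual field while the robot walks, so that regularities induced by the sensory-motor coordination shape which stimuli are treated as salient.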