Modulating Multi-Modal Integration in a Robot Forward Model for Sensory Enhancement and Self-Perception
Abstract
This work investigates how different modalities (i.e., visual, proprioceptive, motor) can be optimally integrated by a humanoid robot during a visual prediction task. A multi-modal forward model inspired by the work of Shim and colleagues [31] is adopted to generate visual predictions given the robot's motor activity and the context it is situated in. We extend the application of this tool by exploiting its optimal integration and predictive capabilities in sensory attenuation processes. According to the predictive brain hypothesis, our brains make sense of the world by anticipating sensory input and by enhancing or, conversely, filtering out information according to our expectations, motivations, desires, and current contexts and tasks. We develop a series of robotic studies focusing on the role of sensory attenuation processes in cognitive development. In particular, we show how attenuating predicted visual information can enhance the perceptual capabilities of a humanoid robot in an object detection task. Moreover, we analyse the dynamics of the model's prediction and its prediction error during the robot's movements. In line with similar studies, our experiments point to the mismatch between visual predictions and observations as a computational candidate for the study of self-perception and self-other distinction in artificial systems. Finally, we test the capability of the model to re-modulate its multi-modal integration weights under changing environmental conditions. This work analyses the dynamic modulation of multi-modal integration, proposing it to be an essential prerequisite for the development of subjective experience in artificial systems.
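As a rough illustration of the mechanism summarised above, the following sketch shows how a forward model's visual prediction can be subtracted from the observed input so that the remaining prediction error highlights unexpected, externally caused stimuli. All names, shapes, and the linear fusion rule are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of forward-model-based sensory attenuation.
# forward_model stands in for the multi-modal predictor described in the
# abstract; its real architecture and modality weighting are not specified here.

def forward_model(motor_cmd, proprioception, weights):
    """Toy linear stand-in: fuse motor and proprioceptive signals into a
    'predicted visual frame' (bounded by tanh as a placeholder)."""
    fused = weights["motor"] * motor_cmd + weights["proprio"] * proprioception
    return np.tanh(fused)

def attenuate(observed_visual, predicted_visual, gain=1.0):
    """Subtract the expected (self-generated) component from the observation.

    The residual is dominated by unexpected stimuli, which is the signal the
    abstract links to object detection and self-other distinction."""
    prediction_error = observed_visual - predicted_visual
    return prediction_error * gain

# Example usage with arbitrary toy data.
rng = np.random.default_rng(0)
motor_cmd = rng.normal(size=(32, 32))
proprioception = rng.normal(size=(32, 32))
weights = {"motor": 0.6, "proprio": 0.4}  # integration weights; the paper
                                          # re-modulates these dynamically
predicted = forward_model(motor_cmd, proprioception, weights)
observed = predicted + rng.normal(scale=0.1, size=(32, 32))  # mostly self-caused input
residual = attenuate(observed, predicted)
print("mean |prediction error|:", np.abs(residual).mean())
```

In this toy setting a small residual indicates that the observed input is well explained by the robot's own action (self-generated), while large localised residuals would flag external events worth attending to.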