How do human newborns come to understand the multimodal environment?
Abstract
For a long time, newborns were considered human beings devoid of perceptual abilities who had to learn, with effort, everything about their physical and social environment. Extensive empirical evidence gathered over recent decades has systematically invalidated this notion. Despite the relatively immature state of their sensory modalities, newborns have perceptions that are acquired through, and triggered by, their contact with the environment. More recently, the study of the fetal origins of the sensory modalities has revealed that all the senses prepare to operate in utero, except for vision, which becomes functional only in the first minutes after birth. This discrepancy in the maturation of the different senses raises the question of how human newborns come to understand our complex, multimodal environment, and more precisely how the visual modality interacts with the tactile and auditory modalities from birth. After defining the tools that newborns use to interact with the different sensory modalities, we review studies across several fields of research, such as intermodal transfer between touch and vision, auditory-visual speech perception, and the existence of links between the dimensions of space, time, and number. Overall, evidence from these studies supports the idea that human newborns are spontaneously driven, and cognitively equipped, to link information collected by the different sensory modalities in order to build a representation of a stable world.