Priming by the variability of visual information.
Abstract
According to recent theories, perception relies on summary representations that encode statistical information about the sensory environment. Here, we used perceptual priming to characterize the representations that mediate categorization of a complex visual array. Observers judged the average shape or color of a target visual array that was preceded by an irrelevant prime array. Manipulating the variability of task-relevant and task-irrelevant feature information in the prime and target orthogonally, we found that observers were faster to respond when the variability of feature information in the prime and target arrays matched. Critically, this effect occurred irrespective of whether the element-by-element features in the prime and target arrays overlapped, and was present even when prime and target features were drawn from opposing categories. This "priming by variance" phenomenon occurred with prime-target intervals as short as 100 ms. Further experiments showed that this effect did not depend on resource allocation and occurred even when prime and target did not share the same spatial location. These results suggest that human observers adapt to the variability of visual information, and they provide evidence for a low-level mechanism by which the range or dispersion of visual information is rapidly extracted. This information may in turn help to set the gain of neuronal processing during perceptual choice.