Audio, visual, and audio-visual egocentric distance perception in virtual environments
Abstract
Previous studies have shown that in real environments, distances are estimated accurately from visual cues. In purely visual (V) virtual environments (VEs), distances are systematically underestimated. In audio (A) real and virtual environments, near distances (< 2 m) are overestimated, whereas far distances (> 2 m) are underestimated. However, little is known about how combined A and V cues interact in egocentric distance perception in VEs. In this paper, we present a study of A, V, and AV egocentric distance perception in VEs. AV rendering is provided by the SMART-I2 platform using tracked passive visual stereoscopy and acoustical wave field synthesis (WFS). Distances are estimated using triangulated blind walking under A, V, and AV conditions. Distance compressions similar to those found in previous studies are observed under each rendering condition. The audio and visual modalities appear to offer similar precision for distance estimation in virtual environments. This casts doubt on the commonly accepted visual capture theory of distance perception.