Prototyping and Evaluating Sensory Substitution Devices by Spatial Immersion in Virtual Environments
Abstract
Various visual-to-auditory Sensory Substitution Devices (SSDs) are in development to assist people without sight. They all convert optical information captured by a camera into sound parameters, but they are evaluated on different tasks and in different contexts. We propose the use of 3D virtual environments to compare the advantages and disadvantages not only of software (transcoding) solutions but also of hardware (component) choices, across various situations and activities. Using a motion capture system, the whole person, rather than a guided avatar, was immersed in modelled virtual places that could be replicated at will. We evaluated the ability to perceive depth through sound in several tasks: detecting and locating an open window, and moving toward and through an open door. Participants aimed the modelled depth camera with a real pointing device that was either held in the hand or fastened to the head. Response delays were analyzed with a linear mixed-effects model to highlight the respective contributions of the pointing device, the target characteristics, and the individual participants. The results encourage further exploitation of our prototyping set-up to test many solutions by varying, e.g., the environments, sensor devices, transcoding rules, and pointing devices, including the use of an eye-tracker.
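To illustrate the statistical approach described above, the minimal sketch below fits a linear mixed-effects model of response delays with the pointing device and target as fixed effects and the participant as a random intercept, using Python's statsmodels. The data file and column names (trials.csv, response_delay, pointing_device, target, participant) are assumptions for illustration, not the study's actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial data: one row per trial with the response delay (s),
# the pointing-device condition (hand / head), the target, and the
# participant identifier. File and column names are assumed.
trials = pd.read_csv("trials.csv")

# Linear mixed-effects model: pointing device and target as fixed effects,
# participant as the grouping factor for a random intercept.
model = smf.mixedlm(
    "response_delay ~ pointing_device * target",
    data=trials,
    groups=trials["participant"],
)
result = model.fit()
print(result.summary())
```

The random intercept per participant captures individual differences in overall response speed, so the fixed-effect estimates reflect the pointing device and target effects rather than between-participant variability.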