Navigating from a depth image converted into sound
Abstract
BACKGROUND: Commercially manufactured depth sensors generate depth images of the kind humans normally obtain through their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as Sensory Substitution Devices (SSDs).

OBJECTIVE: The aim of the present study was to test such a design as a travel aid in a navigation task in real mazes.

METHODS: Our portable device (MeloSee) converted the 2-D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, lateral position into stereo panning, and vertical position into pitch. Twenty-one blindfolded young adults each navigated along four different paths during two sessions separated by a one-week interval. In some instances, they also had to deal with a dual task, i.e., to recognize a temporal pattern applied through a tactile vibrator while they navigated.

RESULTS: All participants learned to use the system, both on new paths and on paths they had already navigated. Based on travel time and errors, performance also improved from one week to the next. The dual task was achieved successfully; it slightly affected, but did not prevent, effective navigation.

CONCLUSIONS: The use of Kinect®-type sensors to implement SSDs is promising, but it is restricted to indoor use and is inefficient at too short a range.
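The abstract specifies only the mapping itself (distance → intensity, lateral position → stereo panning, vertical position → pitch), not the actual MeloSee implementation. The following is a minimal sketch of that kind of depth-to-sound conversion; the sample rate, frame duration, pitch range, maximum depth, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

SAMPLE_RATE = 44100           # audio sample rate in Hz (assumption)
FRAME_DURATION = 0.1          # seconds of audio per depth frame (assumption)
F_MIN, F_MAX = 200.0, 2000.0  # pitch range for vertical position (assumption)

def depth_frame_to_stereo(depth, d_max=4.0):
    """Convert one depth image (rows x cols, metres) into a stereo buffer.

    Mapping, as described in the abstract:
      - row index    -> pitch (higher rows -> higher pitch)
      - column index -> stereo panning (left columns -> left channel)
      - depth value  -> intensity (nearer surfaces -> louder)
    """
    rows, cols = depth.shape
    n = int(SAMPLE_RATE * FRAME_DURATION)
    t = np.arange(n) / SAMPLE_RATE
    left = np.zeros(n)
    right = np.zeros(n)

    # One sine tone per image row; log-spaced pitches over the image height.
    freqs = np.geomspace(F_MAX, F_MIN, rows)  # top row = highest pitch
    pans = np.linspace(0.0, 1.0, cols)        # 0 = full left, 1 = full right

    for r in range(rows):
        tone = np.sin(2 * np.pi * freqs[r] * t)
        for c in range(cols):
            # Nearer obstacles sound louder; readings beyond d_max are silent.
            amp = max(0.0, 1.0 - min(depth[r, c], d_max) / d_max)
            if amp == 0.0:
                continue
            left += (1.0 - pans[c]) * amp * tone
            right += pans[c] * amp * tone

    stereo = np.stack([left, right], axis=1)
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo  # normalize to avoid clipping

# Example: a near obstacle in the upper left of an 8x8 depth image
# produces a high-pitched tone panned to the left channel.
frame = np.full((8, 8), 4.0)
frame[1, 1] = 0.5
buffer = depth_frame_to_stereo(frame)  # ready to stream to an audio device
```

Regenerating such a buffer for every incoming depth frame would yield the real-time sonification the abstract describes; a production device would also need smoothing between frames to avoid audible clicks.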