Conference paper, Year: 2017

Seeing without sight: an automatic cognition system dedicated to blind and visually impaired people

Abstract

In this paper we present an automatic cognition system, based on computer vision algorithms and deep convolutional neural networks, designed to assist visually impaired (VI) users during navigation in highly dynamic urban scenes. A first feature concerns the real-time detection of various types of objects present in the outdoor environment that are relevant from the perspective of a VI person. The objects are followed between successive frames using a novel tracker, which exploits an offline-trained neural network and is able to track generic objects using motion patterns and visual attention models. The system is able to handle occlusions, sudden camera/object movements, rotation, and other complex appearance changes. Finally, an object classification module is proposed that exploits the YOLO algorithm and extends it with new categories specific to assistive-device applications. The feedback to VI users is transmitted as a set of acoustic warning messages through bone-conduction headphones. The experimental evaluation, performed on the VOT 2016 dataset and on a set of videos acquired with the help of VI users, demonstrates the effectiveness and efficiency of the proposed method.
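
To make the pipeline described above more concrete, the sketch below outlines one possible detection, tracking, and acoustic-warning loop in Python. It is not the authors' implementation: detect_objects, NeuralTracker, and play_warning are hypothetical placeholders standing in for the paper's YOLO-based detector, offline-trained tracker, and bone-conduction audio output.

import cv2  # OpenCV, used here only for video capture


def detect_objects(frame):
    """Placeholder for a YOLO-style detector extended with assistive categories.

    Expected to return a list of (label, confidence, (x, y, w, h)) tuples;
    returns an empty list here because no trained model is bundled.
    """
    return []


class NeuralTracker:
    """Placeholder for the offline-trained tracker that follows objects
    between successive frames (the association logic is a stub)."""

    def __init__(self):
        self.tracks = []

    def update(self, frame, detections):
        # Real system: match detections to existing tracks using motion
        # patterns and visual attention; here we simply keep the detections.
        self.tracks = detections
        return self.tracks


def play_warning(label, box, frame_width):
    """Placeholder acoustic feedback: map an object's horizontal position
    to a coarse left / ahead / right warning message."""
    x, _, w, _ = box
    center = x + w / 2.0
    if center < frame_width / 3:
        side = "left"
    elif center > 2 * frame_width / 3:
        side = "right"
    else:
        side = "ahead"
    print(f"warning: {label} {side}")  # real system: bone-conduction audio


def run(video_source=0):
    capture = cv2.VideoCapture(video_source)
    tracker = NeuralTracker()
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        for label, confidence, box in tracker.update(frame, detect_objects(frame)):
            play_warning(label, box, frame.shape[1])
    capture.release()


if __name__ == "__main__":
    run()

The left/ahead/right split is only one illustrative way to turn detections into warning messages; the paper itself describes the feedback simply as acoustic warning messages delivered through bone-conduction headphones.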
No file deposited

Dates and versions

hal-01687512, version 1 (18-01-2018)

Identifiers

  • HAL Id: hal-01687512, version 1

Cite

Bogdan Mocanu, Ruxandra Tapu, Titus Zaharia. Seeing without sight: an automatic cognition system dedicated to blind and visually impaired people. ICCV 2017: International Conference on Computer Vision, Oct 2017, Venice, Italy. pp. 1452-1459. ⟨hal-01687512⟩