Low-Latency Human-Computer Auditory Interface Based on Real-Time Vision Analysis - Laboratoire d'Etude de l'Apprentissage et du Développement (LEAD)
Conference paper, 2022

Abstract

This paper proposes a visuo-auditory substitution method to assist visually impaired people in scene understanding. Our approach focuses on localising persons in the user's vicinity in order to ease urban walking. Since real-time, low-latency processing is required in this context for the user's safety, we propose an embedded system. The processing relies on a lightweight convolutional neural network to perform efficient 2D person localisation. This measurement is enriched with the corresponding depth information for each detected person and is then transcribed into a stereophonic signal via a head-related transfer function. A GPU-based implementation is presented that reaches real-time processing at 23 frames/s on a 640x480 video stream. An experiment shows that this method enables accurate, real-time audio-based localisation.
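The last stage of the pipeline described above can be illustrated with a short sketch. The following Python fragment is not the authors' implementation; it only shows, under assumed parameters (camera field of view, audio sample rate, a synthetic beep cue, identity HRIRs, and a simple 1/d distance attenuation), how a detected person's pixel position and depth could be mapped to a stereophonic signal via a head-related transfer function. In practice, the impulse-response pair would be looked up from a measured HRTF database (e.g. the MIT KEMAR set) at the computed azimuth.

# Minimal sketch, assuming a pinhole camera centred on the listener's gaze.
# All constants below are assumptions, not values from the paper (except the
# 640-pixel image width, which matches the reported 640x480 stream).
import numpy as np
from scipy.signal import fftconvolve

FS = 44100            # audio sample rate (Hz), assumed
H_FOV_DEG = 60.0      # assumed horizontal field of view of the camera
IMG_WIDTH = 640       # matches the 640x480 stream reported in the paper

def pixel_to_azimuth(x_px: float) -> float:
    """Map a horizontal pixel coordinate to an azimuth angle in degrees."""
    return (x_px / IMG_WIDTH - 0.5) * H_FOV_DEG

def spatialize(mono: np.ndarray, hrir_left: np.ndarray,
               hrir_right: np.ndarray, depth_m: float) -> np.ndarray:
    """Convolve a mono cue with the HRIR pair for the person's direction and
    attenuate it with distance (a simple 1/d law standing in for the paper's
    depth encoding). Returns an (N, 2) stereo buffer."""
    gain = 1.0 / max(depth_m, 1.0)
    left = fftconvolve(mono, hrir_left) * gain
    right = fftconvolve(mono, hrir_right) * gain
    return np.stack([left, right], axis=1)

# Example: a 50 ms, 880 Hz beep for a person detected at x = 480 px, 3 m away.
t = np.arange(int(0.05 * FS)) / FS
beep = np.sin(2 * np.pi * 880 * t).astype(np.float32)
az = pixel_to_azimuth(480)   # +15 degrees; selects the HRIR pair to load
# A real system would load the HRIRs measured at azimuth `az`; identity
# filters keep this sketch runnable without an HRTF database.
hrir_left = hrir_right = np.array([1.0], dtype=np.float32)
stereo = spatialize(beep, hrir_left, hrir_right, depth_m=3.0)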
Main file
ICASSP2022.pdf (369.75 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03796641, version 1 (04-10-2022)

Cite

Florian Scalvini, Camille Bordeau, Maxime Ambard, Cyrille Migniot, Julien Dubois. Low-Latency Human-Computer Auditory Interface Based on Real-Time Vision Analysis. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2022, Singapore. pp.36-40, ⟨10.1109/ICASSP43922.2022.9747094⟩. ⟨hal-03796641⟩