Dynamic subtitles: a Multimodal video accessibility enhancement dedicated to deaf and hearing impaired users
Conference paper, Year: 2020


Abstract

In this paper, we introduce a novel dynamic subtitle positioning system designed to improve the accessibility of video documents for deaf and hearing impaired people. Our framework places the subtitle in the near vicinity of the active speaker, allowing the viewer to follow the visual content while reading the textual information. The proposed system is based on a multimodal fusion of text, audio and visual information to detect and recognize the identity of the active speaker. The experimental evaluation, performed on a large dataset of more than 30 videos, validates the methodology with average accuracy and recognition rates above 92%. The subjective evaluation demonstrates the effectiveness of our approach, which outperforms both conventional (static) subtitling and other state-of-the-art techniques in terms of overall viewing experience and eyestrain reduction.
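
The full text is not deposited here, so the sketch below is purely illustrative and is not the authors' implementation. Assuming the active speaker has already been localized as a face bounding box (the multimodal detection step described in the abstract), it shows one simple way a subtitle box could be placed near that speaker while being kept inside the frame; the names `Box` and `place_subtitle` and the `margin` parameter are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned bounding box in pixel coordinates (x, y = top-left corner)."""
    x: int
    y: int
    w: int
    h: int


def place_subtitle(speaker: Box, text_w: int, text_h: int,
                   frame_w: int, frame_h: int, margin: int = 10) -> Box:
    """Place the subtitle just below the active speaker's face,
    horizontally centred on it and clamped to stay inside the frame."""
    x = speaker.x + speaker.w // 2 - text_w // 2
    y = speaker.y + speaker.h + margin
    # Clamp horizontally so the text never leaves the frame.
    x = max(0, min(x, frame_w - text_w))
    # If there is no room below the face, fall back to placing the text above it.
    if y + text_h > frame_h:
        y = max(0, speaker.y - margin - text_h)
    return Box(x, y, text_w, text_h)


if __name__ == "__main__":
    # Example: a 1280x720 frame with the active speaker detected on the right.
    speaker = Box(x=900, y=200, w=180, h=220)
    print(place_subtitle(speaker, text_w=400, text_h=60, frame_w=1280, frame_h=720))
```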
No file deposited

Dates and versions

hal-04305389, version 1 (24-11-2023)

Identifiers

Cite

Bogdan Mocanu, Ruxandra Tapu, Titus Zaharia. Dynamic subtitles: a Multimodal video accessibility enhancement dedicated to deaf and hearing impaired users. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Oct 2019, Seoul, South Korea. pp. 2558-2566, ⟨10.1109/ICCVW.2019.00313⟩. ⟨hal-04305389⟩