DEEP-HEAR: a multimodal subtitle positioning system dedicated to deaf and hearing-impaired people
Abstract
In this paper, we introduce DEEP-HEAR, a multimodal dynamic subtitle positioning framework designed to improve the accessibility of multimedia documents for deaf and hearing-impaired people (HIP). The proposed system exploits both computer vision algorithms and deep convolutional neural networks, specifically designed and tuned to detect and recognize the identity of the active speaker. The main contributions of the paper are: (1) a novel method for recognizing the various characters present in the video stream; (2) a video temporal segmentation algorithm that divides the video sequence into semantic units based on face tracks and visual consistency; and (3), at the core of our approach, a novel active speaker recognition method relying on the fusion of multimodal information from the text, audio, and video streams. The experimental results, carried out on a large-scale dataset of more than 30 videos, validate the proposed methodology with average accuracy and recognition rates above 90%. Moreover, the method is robust to significant object/camera motion and face pose variation, yielding gains of more than 8% in precision and recall over state-of-the-art techniques. The subjective evaluation of the proposed dynamic subtitle positioning system demonstrates the effectiveness of our approach.