Enhancing the accessibility of video content for the hearing impaired through fully automatic dynamic captioning
Abstract
In this paper, we introduce an automatic subtitle positioning approach designed to enhance the accessibility of multimedia documents for deaf and hearing-impaired viewers. Using a dynamic subtitle and captioning approach that exploits various computer vision techniques, including face detection, tracking, and recognition, temporal video segmentation into shots and scenes, and active speaker recognition, we position each subtitle segment in the vicinity of the active speaker. An experimental evaluation on 30 videos validates our approach, with average F1-scores above 92%.
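To make the core placement idea concrete, the following is a minimal sketch in Python, assuming OpenCV is available: a face is detected in a frame and the caption is anchored just below it, falling back to conventional bottom placement when no face is found. The detector, frame source, and placement heuristic here are illustrative stand-ins, not the paper's full pipeline, which additionally relies on face tracking and recognition, shot/scene segmentation, and active speaker recognition.

import cv2

def place_subtitle(frame, text):
    """Draw `text` near the first detected face, or bottom-center as a fallback.

    Illustrative only: a Haar-cascade detector stands in for the paper's
    combined detection/tracking/active-speaker pipeline.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = frame.shape[:2]
    if len(faces) > 0:
        # Anchor the caption just below the first detected face,
        # clamped so it stays inside the frame.
        x, y, fw, fh = faces[0]
        org = (max(0, int(x)), min(h - 10, int(y + fh + 25)))
    else:
        # Fallback: conventional static bottom placement.
        org = (w // 4, h - 30)

    cv2.putText(frame, text, org, cv2.FONT_HERSHEY_SIMPLEX,
                0.8, (255, 255, 255), 2, cv2.LINE_AA)
    return frame

In the full approach described above, the anchor point would instead follow the tracked face of the recognized active speaker, so that each subtitle segment remains attached to the person currently speaking across shots and scenes.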