Conference paper, 2023

End-to-end Neuromorphic Lip Reading

Abstract

Human speech perception is intrinsically multi-modal: speech production requires the speaker to move the lips, producing visual cues in addition to auditory information. Lip reading consists of visually interpreting the movements of the lips to understand speech without the use of sound. It is an important task, since it can either complement an audio-based speech recognition system or replace it when sound is not available. In this paper, we introduce a neuromorphic model for lip reading that takes as input events produced by an event-based sensor capturing lip motion, and classifies short event sequences into word categories using a spiking neural network (SNN) architecture. Experimental results show that the proposed model successfully leverages key advantages of neuromorphic approaches such as energy efficiency and low latency, which are central features in real-time embedded scenarios. To the best of our knowledge, this is the first end-to-end neuromorphic lip reading model.
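To make the pipeline concrete, below is a minimal sketch of this kind of architecture: polarity-tagged events are binned into frames and classified by a small spiking network built from leaky integrate-and-fire (LIF) neurons. This is not the authors' model; every layer size, the class count, and the names (LIF, LipReadingSNN) are hypothetical assumptions for illustration, and training such a network would additionally require surrogate gradients for the non-differentiable spike threshold.

```python
# Illustrative sketch only (not the paper's model): events are accumulated
# into T binary frames of shape (2, H, W) -- one channel per polarity --
# and classified by a small LIF-based SNN. All sizes are assumptions.
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Leaky integrate-and-fire neuron with a hard threshold and soft reset."""
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, x, mem):
        mem = self.beta * mem + x                # leaky integration of input current
        spk = (mem >= self.threshold).float()    # spike when the threshold is crossed
        mem = mem - spk * self.threshold         # soft reset of spiking neurons
        return spk, mem

class LipReadingSNN(nn.Module):
    """Classifies event frames of shape (T, B, 2, H, W) into word categories."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(2, 16, 3, stride=2, padding=1)
        self.lif1 = LIF()
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.fc = nn.Linear(16 * 4 * 4, n_classes)
        self.lif2 = LIF()

    def forward(self, frames):
        mem1 = mem2 = 0.0
        counts = 0.0
        for x in frames:                         # iterate over time bins
            spk1, mem1 = self.lif1(self.conv(x), mem1)
            feat = self.pool(spk1).flatten(1)
            spk2, mem2 = self.lif2(self.fc(feat), mem2)
            counts = counts + spk2               # rate coding: spike count per class
        return counts                            # predicted word = argmax of counts

# Usage: 20 time bins, batch of 4, a 64x64 sensor, ~5% random event density
frames = (torch.rand(20, 4, 2, 64, 64) < 0.05).float()
logits = LipReadingSNN()(frames)
print(logits.argmax(dim=1))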
Main file: CVPR_2023_SNNLipReading.pdf (388.37 KB)
Additional file: Lip-Reading SNN Model.pdf (5.03 MB)
Origin: files produced by the author(s)

Dates and versions

hal-04183135, version 1 (18-08-2023)

Identifiers

HAL Id: hal-04183135
DOI: 10.1109/CVPRW59228.2023.00431

Cite

Hugo Bulzomi, Marcel Schweiker, Amélie Gruel, Jean Martinet. End-to-end Neuromorphic Lip Reading. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2023, Vancouver, Canada. pp. 4100-4107, ⟨10.1109/CVPRW59228.2023.00431⟩. ⟨hal-04183135⟩