Spike-SLR: An Energy-efficient Parallel Spiking Transformer for Event-based Sign Language Recognition
Abstract
Event-based cameras are well suited to sign language recognition (SLR), providing motion perception with high dynamic range, high temporal resolution, high power efficiency, and low latency. Spiking Neural Networks (SNNs) are naturally suited to the asynchronous, sparse data produced by event cameras thanks to their spike-based, event-driven paradigm, and they consume less power than artificial neural networks. In this paper, we introduce the spiking transformer to event-based SLR with a model named Spike-SLR, which includes two novel blocks: a spike soft-attention block, which lets the model focus on regions with high spike rates, reducing the impact of noise and improving accuracy; and a parallel spike transformer block with a simplified spiking self-attention mechanism, which increases computational efficiency. On SL-Animals-DVS-4sets and SL-Animals-DVS-3sets, Spike-SLR achieves accuracies of 89.47% and 90.06%, outperforming the state-of-the-art (SOTA) model by 1.35% and 2.61%, respectively. Moreover, Spike-SLR needs only 0.03 mJ to process a sequence of event frames, a 99.27% reduction in power consumption compared to the SOTA model.
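To make the "simplified spiking self-attention" idea concrete, below is a minimal sketch of a spiking self-attention block, assuming a Spikformer-style design in which queries, keys, and values are binary spike tensors, so attention reduces to scaled matrix products with no softmax. This is an illustration under those assumptions, not the authors' implementation; all class and parameter names (`HeavisideSpike`, `SpikingSelfAttention`, `scale`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class HeavisideSpike(torch.autograd.Function):
    """Binary spike activation with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold.
        return grad_out * (x.abs() < 0.5).float()


class SpikingSelfAttention(nn.Module):
    """Hypothetical spiking self-attention: Q, K, V are spike (0/1) tensors."""

    def __init__(self, dim: int, heads: int = 8, scale: float = 0.125):
        super().__init__()
        self.heads, self.scale = heads, scale
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):  # x: (batch, tokens, dim), already spike-coded
        b, n, d = x.shape
        spike = HeavisideSpike.apply
        # Project and re-spike so Q, K, V remain binary.
        q = spike(self.q(x)).view(b, n, self.heads, d // self.heads).transpose(1, 2)
        k = spike(self.k(x)).view(b, n, self.heads, d // self.heads).transpose(1, 2)
        v = spike(self.v(x)).view(b, n, self.heads, d // self.heads).transpose(1, 2)
        # Spike inputs are non-negative, so no softmax is needed:
        # attention is a plain scaled product of spike matrices,
        # which is what makes the mechanism cheap in additions.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return spike(self.proj(out))
```

Because every operand is a 0/1 spike tensor, the matrix products count spike coincidences rather than performing dense floating-point multiply-accumulates, which is consistent with the energy savings the abstract reports.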