Preprint / Working paper, 2023

Training dynamic models using early exits for automatic speech recognition on resource-constrained devices

Abstract

The ability to dynamically adjust the computational load of neural models at inference time is crucial for on-device processing, where computational power is limited and time-varying. Established approaches to neural model compression exist, but they produce architecturally static models. In this paper, we investigate early-exit architectures, which rely on intermediate exit branches, applied to large-vocabulary speech recognition. This enables dynamic models that adapt their computational cost to the available resources and to the target recognition performance. Unlike previous works, in addition to using pre-trained backbones we also train models from scratch with an early-exit architecture. Experiments on public datasets show that early-exit models trained from scratch not only preserve performance when using fewer encoder layers, but also improve task accuracy compared with single-exit models and with pre-trained models. Additionally, we investigate an exit-selection strategy based on posterior probabilities as an alternative to frame-based entropy.
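For intuition, the sketch below shows how an early-exit encoder of this general kind can stop computation at the first sufficiently confident intermediate branch. It is a minimal PyTorch illustration, not the paper's implementation: the class name, layer sizes, the `threshold` value, and the choice of a per-layer linear classifier head are all hypothetical. The confidence score here is the mean maximum posterior over frames, with the frame-entropy alternative noted in a comment.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Toy encoder with an exit branch after every transformer layer."""

    def __init__(self, dim=256, vocab=32, num_layers=6, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(num_layers)
        )
        # One lightweight classifier head per exit (hypothetical design).
        self.exits = nn.ModuleList(nn.Linear(dim, vocab) for _ in range(num_layers))
        self.threshold = threshold

    def forward(self, x):
        # x: (batch, time, dim). Run layers until one exit is confident enough.
        log_probs = None
        for layer, head in zip(self.layers, self.exits):
            x = layer(x)
            log_probs = head(x).log_softmax(dim=-1)
            # Confidence = mean over frames of the max posterior probability.
            # A frame-entropy rule would instead exit once the mean entropy
            # -(p * p.log()).sum(-1).mean() drops below a threshold.
            confidence = log_probs.exp().max(dim=-1).values.mean()
            if confidence >= self.threshold:
                break  # early exit: skip the remaining layers
        return log_probs

# Example: a random 100-frame "utterance".
model = EarlyExitEncoder()
out = model(torch.randn(1, 100, 256))  # (1, 100, 32) log-posteriors
```

In architectures of this style, training commonly sums a loss (e.g. CTC) over all exits so that every branch learns to decode on its own; at inference, `threshold` then trades accuracy against compute.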
Main file: 2309.09546.pdf (4.17 MB). Origin: files produced by the author(s).

Dates and versions

hal-04216190, version 1 (23-09-2023)

Cite

George August Wright, Umberto Cappellazzo, Salah Zaiem, Desh Raj, Lucas Ondel Yang, et al. Training dynamic models using early exits for automatic speech recognition on resource-constrained devices. 2023. ⟨hal-04216190⟩
