AN ANALYSIS OF LINEAR COMPLEXITY ATTENTION SUBSTITUTES WITH BEST-RQ

Conference paper, 2024

Abstract

Self-Supervised Learning (SSL) has proven effective in various domains, including speech processing. However, SSL is computationally and memory expensive, in part due to the quadratic complexity of multi-head self-attention (MHSA). Alternatives to MHSA have been proposed and used in the speech domain, but have yet to be investigated properly in an SSL setting. In this work, we study the effects of replacing MHSA with recent state-of-the-art alternatives that have linear complexity, namely HyperMixing, Fastformer, SummaryMixing, and Mamba. We evaluate these methods in terms of speed, VRAM consumption, and performance on the SSL MP3S benchmark. Results show that these linear alternatives maintain competitive performance compared to MHSA while, on average, decreasing VRAM consumption by around 20% to 60% and increasing speed by 7% to 65% for input sequences ranging from 20 to 80 seconds.
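To make the idea of a linear-complexity MHSA substitute concrete, below is a minimal PyTorch sketch of a SummaryMixing-style token mixer: each frame is processed by a local branch, while global context comes from a single mean-pooled "summary" vector, giving O(T) rather than O(T²) cost in the sequence length T. The class name, layer sizes, and structure are illustrative assumptions, not the authors' implementation.

```python
# Sketch only (assumption): a SummaryMixing-style linear-time token mixer,
# illustrating the kind of drop-in replacement for quadratic self-attention
# studied in the paper.
import torch
import torch.nn as nn


class SummaryMixingSketch(nn.Module):
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(dim, hidden), nn.GELU())    # per-frame branch f(x_t)
        self.summary = nn.Sequential(nn.Linear(dim, hidden), nn.GELU())  # branch averaged over time
        self.combine = nn.Linear(2 * hidden, dim)                        # merges local and global info

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        local = self.local(x)                                # (B, T, H)
        summary = self.summary(x).mean(dim=1, keepdim=True)  # (B, 1, H), one vector per utterance
        summary = summary.expand(-1, x.size(1), -1)          # broadcast the summary to every frame
        return self.combine(torch.cat([local, summary], dim=-1))


if __name__ == "__main__":
    layer = SummaryMixingSketch(dim=256)
    feats = torch.randn(2, 2000, 256)  # e.g. roughly 20 s of 10 ms frames
    print(layer(feats).shape)          # torch.Size([2, 2000, 256])
```

Because no pairwise frame-to-frame scores are computed, both time and memory grow linearly with the input length, which is the property the paper's speed and VRAM comparisons exploit.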
Main file
2024_SLT_LinearSSL.pdf (1.59 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04732640, version 1 (11-10-2024)

Identifiers

  • HAL Id: hal-04732640, version 1

Cite

Ryan Whetten, Titouan Parcollet, Adel Moumen, Marco Dinarelli, Yannick Estève. AN ANALYSIS OF LINEAR COMPLEXITY ATTENTION SUBSTITUTES WITH BEST-RQ. Spoken Language Technology Workshop, Dec 2024, Macao, China. ⟨hal-04732640⟩
