Adapting the adapters for code-switching in multilingual ASR - Archive ouverte HAL
Preprint / Working Paper, Year: 2023

Adapting the adapters for code-switching in multilingual ASR

Atharva Kulkarni
Ajinkya Kulkarni
Hanan Aldarmaki

Abstract

Recently, large pre-trained multilingual speech models have shown potential in scaling Automatic Speech Recognition (ASR) to many low-resource languages. Some of these models employ language adapters in their formulation, which helps to improve monolingual performance and avoids some of the drawbacks of multilingual modeling on resource-rich languages. However, this formulation restricts the usability of these models on code-switched speech, where two languages are mixed together in the same utterance. In this work, we propose ways to effectively fine-tune such models on code-switched speech by assimilating information from both language adapters at each language adaptation point in the network. We also model code-switching as a latent binary sequence that can guide the flow of information from each language adapter at the frame level. The proposed approaches are evaluated on three code-switched datasets pairing Arabic, Mandarin, and Hindi with English, showing consistent improvements in code-switching performance with at least a 10% absolute reduction in CER across all test sets.
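The abstract describes two technical ingredients: combining the outputs of both language adapters at each adaptation point, and a latent binary sequence that routes information between the adapters at the frame level. As a rough illustration only (not the authors' implementation; see the paper for the actual formulation), the PyTorch sketch below mixes two bottleneck adapters with a per-frame sigmoid gate, a common continuous relaxation of a binary switch. All names (GatedDualAdapter, adapter_a, the bottleneck size) are hypothetical.

# Minimal sketch of gated dual-adapter mixing for code-switched speech.
# Not the paper's code; module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, non-linearity, up-project."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection


class GatedDualAdapter(nn.Module):
    """Mixes two language adapters with a per-frame gate in [0, 1].

    The gate stands in for the latent binary code-switching sequence:
    values near 1 route a frame through language A's adapter, values
    near 0 through language B's.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.adapter_a = Adapter(dim)  # e.g. English adapter
        self.adapter_b = Adapter(dim)  # e.g. Mandarin adapter
        self.gate = nn.Linear(dim, 1)  # frame-level switch predictor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) hidden states at an adaptation point
        g = torch.sigmoid(self.gate(x))  # (batch, frames, 1)
        return g * self.adapter_a(x) + (1 - g) * self.adapter_b(x)


if __name__ == "__main__":
    layer = GatedDualAdapter(dim=256)
    frames = torch.randn(2, 100, 256)  # dummy encoder hidden states
    print(layer(frames).shape)  # torch.Size([2, 100, 256])

In such a setup, the gate could be trained latently end-to-end or supervised with frame-level language labels where available; the abstract's "latent binary sequence" suggests the former.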
Main file
2310.07423.pdf (307.18 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04238811, version 1 (12-10-2023)

Cite

Atharva Kulkarni, Ajinkya Kulkarni, Miguel Couceiro, Hanan Aldarmaki. Adapting the adapters for code-switching in multilingual ASR. 2023. ⟨hal-04238811⟩
63 Views
313 Downloads
