Conference paper, Year: 2022

Alignment in ASR and L1 listeners’ recognition of L2 learner speech: A replication study

Abstract

Automatic Speech Recognition (ASR) programs could provide useful feedback to L2 pronunciation learners (McCrocklin, 2016; Levis & Suvorov, 2020). Many researchers have explored their potential in L2 learning, including learner perceptions of ASR for learning a vowel and a suprasegmental feature in L2 French (Liakin, Cardoso, & Liakina, 2017), and learner beliefs about ASR's usefulness in general and for learning vowel contrasts (Inceoglu et al., 2020). Others have examined the accuracy of different programs (McCrocklin et al., 2019), or how programs perform compared to native listeners (Inceoglu et al., forthcoming). The latter study assessed English spoken by Taiwanese intermediate learners, using L1 English listeners and the Google Voice Typing dictation system.

We replicated their study with a different ASR tool (dictation.io) and speakers of a different L1 (French instead of Chinese), but at a similar low-to-intermediate proficiency level. French-accented English is of interest because it may pose ASR challenges similar to those of Spanish-accented English: McCrocklin and Edalatishams (2020) found no significant correlations between the accuracy of Google's ASR output for L1 Spanish learners and measures of recognition, comprehensibility, and accentedness. This raises the question of whether different ASR tools may suit different learner L1s.

Our pilot compares the intelligibility (recognition) assessments of ten L1 English listeners with the output of the dictation.io program. The rated speech was L2 English produced by four L1 French speakers, and intelligibility was measured by word transcription. The first research question asks: How (mis)aligned are ASR outputs and L1 listeners' transcriptions? A sub-question is: How accurate is dictation.io currently for this L2 English? Listeners used standard English orthography to transcribe 76 monosyllabic words (19 from each of the four speakers) elicited in a word-reading task. Their transcriptions were compared to the ASR output, using each speaker's first production. Error types were classified following Inceoglu et al. (forthcoming) as an incorrect vowel, an incorrect consonant, or multiple combined errors. Additionally, recordings of read-aloud sentences were rated on a Likert scale for comprehensibility, operationalized as the amount of effort required to understand. These ratings served the second research question: Does the accuracy of dictation.io for this L2 speech correlate with human listeners' recognition (intelligibility) and with their comprehensibility ratings?

The error types and proportions were generally consistent with Inceoglu et al.'s findings for both the ASR system and the L1 listeners. This supports their suggestion that current ASR technology may be particularly useful for lower-proficiency learners, with some pedagogical provisos.
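To make the word-transcription comparison concrete, the following minimal Python sketch scores word-level intelligibility and ASR-listener alignment by exact orthographic match. The word lists, the normalise() rule, and the agreement measure are illustrative assumptions, not the authors' actual materials or pipeline.

```python
# Minimal sketch of exact-match intelligibility scoring, assuming one response
# per target word; the word lists below are placeholders, not the study items.

def normalise(word: str) -> str:
    """Reduce a transcription to lower-case standard orthography."""
    return word.strip().lower()

def accuracy(responses: list[str], targets: list[str]) -> float:
    """Proportion of responses that exactly match their target word."""
    assert len(responses) == len(targets)
    hits = sum(normalise(r) == normalise(t) for r, t in zip(responses, targets))
    return hits / len(targets)

# The pilot used 76 monosyllables (19 per speaker); four are shown here.
targets    = ["ship", "sheep", "bat", "but"]
asr_output = ["ship", "ship",  "bat", "boat"]   # dictation.io transcriptions
listener_1 = ["ship", "sheep", "bat", "but"]    # one of the ten L1 listeners

print(f"ASR accuracy:           {accuracy(asr_output, targets):.2f}")
print(f"Listener accuracy:      {accuracy(listener_1, targets):.2f}")
# Item-by-item agreement between ASR and a listener (the alignment of RQ1).
print(f"ASR-listener agreement: {accuracy(asr_output, listener_1):.2f}")
```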
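Likewise, here is a minimal sketch of the correlation analysis behind the second research question, assuming per-speaker ASR accuracy, mean listener intelligibility, and mean comprehensibility ratings have already been computed. All numbers are placeholders rather than study data, and with only four speakers any p-values should be read with caution.

```python
# Hypothetical sketch of the RQ2 correlation analysis; values are placeholders.

from scipy.stats import spearmanr

# One value per speaker (n = 4 in the pilot).
asr_accuracy      = [0.58, 0.63, 0.71, 0.49]  # proportion of words dictation.io got right
intelligibility   = [0.64, 0.70, 0.75, 0.55]  # mean listener word-recognition accuracy
comprehensibility = [4.1, 4.6, 5.2, 3.3]      # mean Likert rating (effort to understand)

# Spearman's rho is a reasonable default for small samples and ordinal Likert data.
rho_int, p_int = spearmanr(asr_accuracy, intelligibility)
rho_com, p_com = spearmanr(asr_accuracy, comprehensibility)

print(f"ASR vs. intelligibility:   rho = {rho_int:.2f}, p = {p_int:.3f}")
print(f"ASR vs. comprehensibility: rho = {rho_com:.2f}, p = {p_com:.3f}")
```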
References

  • dictation.io. (2022). https://dictation.io/
  • Inceoglu, S., Chen, W.-H., & Lim, H. (forthcoming). Assessment of L2 speech intelligibility: A comparison of native listeners and ASR technology. ReCALL.
  • Inceoglu, S., Lim, H., & Chen, W.-H. (2020). ASR for EFL pronunciation practice: Segmental development and learners' beliefs. The Journal of Asia TEFL, 17(3), 824–840.
  • Levis, J., & Suvorov, R. (2013). Automatic speech recognition. In C. A. Chapelle (Ed.), The Encyclopedia of Applied Linguistics (1st ed., Vol. 1–10, pp. 423–430). Wiley. https://doi.org/10.1002/9781405198431.wbeal0066.pub2
  • Liakin, D., Cardoso, W., & Liakina, N. (2017). Mobilizing instruction in a second-language context: Learners' perceptions of two speech technologies. Languages, 2(3), 11. https://doi.org/10.3390/languages2030011
  • McCrocklin, S., & Edalatishams, I. (2020). Revisiting popular speech recognition software for ESL speech. TESOL Quarterly, 54(4), 1086–1097. https://doi.org/10.1002/tesq.3006
  • McCrocklin, S., Humaidan, A., & Edalatishams, I. (2019). ASR dictation program accuracy: Have current programs improved? In J. M. Levis, C. Nagle, & E. Todey (Eds.), Proceedings of the 10th Pronunciation in Second Language Learning and Teaching Conference (pp. 191–200). https://iastate.box.com/shared/static/wtnv3yg890ze2ibtkihwdpts7bfojt8h.pdf
  • McCrocklin, S. M. (2016). Pronunciation learner autonomy: The potential of Automatic Speech Recognition. System, 57, 25–42. https://doi.org/10.1016/j.system.2015.12.013
No file deposited

Dates and versions

hal-03929160, version 1 (08-01-2023)

Identifiers

  • HAL Id: hal-03929160, version 1

Cite

Vincent Chanethom, Alice Henderson. Alignment in ASR and L1 listeners’ recognition of L2 learner speech: A replication study. 15th International Conference on Native and Non-native Accents of English, Université de Łódź, Dec 2022, Łódź, Poland. ⟨hal-03929160⟩

Collections

UGA TICE LIDILEM
