Preprint submitted for publication in a journal
Preprint / Version 1

HOW WELL CAN ASR TECHNOLOGY UNDERSTAND FOREIGN-ACCENTED SPEECH?

H. K. Souza, W. Gottardi

DOI:

https://doi.org/10.1590/010318138668782v61n32022

Keywords:

intelligibility, automatic speech recognition, L2 pronunciation development, autonomous learning

Abstract

Following the COVID-19 pandemic, digital technology is more present in classrooms than ever. Automatic Speech Recognition (ASR) offers interesting possibilities for language learners to produce more output in a foreign language (FL). ASR is especially suited for autonomous pronunciation learning when used as a dictation tool that transcribes the learner’s speech (McCROCKLIN, 2016). However, ASR tools are trained with monolingual native speakers in mind and do not reflect the global reality of English speakers. Consequently, the present study examined how well two ASR-based dictation tools understand foreign-accented speech, and which FL speech features cause intelligibility breakdowns. English speech samples from 15 Brazilian Portuguese and 15 Spanish speakers were obtained from an online database (WEINBERGER, 2015) and submitted to two ASR dictation tools: Microsoft Word and VoiceNotebook. The resulting transcriptions were manually inspected, coded, and categorized. The results show that overall intelligibility was high for both tools. However, many features of normal FL speech, such as vowel and consonant substitution, caused the ASR dictation tools to misinterpret the message, leading to communication breakdowns. The results are discussed from a pedagogical viewpoint.


Posted

10/26/2022

How to Cite

Souza, H. K., & Gottardi, W. (2022). HOW WELL CAN ASR TECHNOLOGY UNDERSTAND FOREIGN-ACCENTED SPEECH?. In SciELO Preprints. https://doi.org/10.1590/010318138668782v61n32022

Section

Linguistics, literature and arts
