Michael Picheny, Zoltan Tuske, et al.
INTERSPEECH 2019
The performance of automatic speech recognition systems degrades with increasing mismatch between the training and testing scenarios. Differences in speaker accents are a significant source of such mismatch. The traditional approach to dealing with multiple accents is to pool data from several accents during training and build a single model in a multi-task fashion, where each task corresponds to an individual accent. In this paper, we explore an alternative model in which we jointly learn an accent classifier and a multi-task acoustic model. Experiments on the American English Wall Street Journal and British English Cambridge corpora demonstrate that our joint model outperforms a strong multi-task acoustic-model baseline: we obtain a 5.94% relative improvement in word error rate on British English and a 9.47% relative improvement on American English. This shows that jointly modeling accent information improves acoustic-model performance.
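The abstract describes the joint architecture only at a high level. As a minimal sketch of the general idea, the PyTorch snippet below wires a shared encoder to both an utterance-level accent classifier and one acoustic-model output head per accent, then combines the two losses. All module names, dimensions, pooling, and loss choices are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of joint accent-classifier + multi-task acoustic-model
# training. Architecture details are assumptions, not the paper's design.
import torch
import torch.nn as nn

class JointAccentAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=512, n_accents=2, n_senones=3000):
        super().__init__()
        # Shared encoder over acoustic features (assumed: a BiLSTM stack).
        self.encoder = nn.LSTM(feat_dim, hidden_dim, num_layers=3,
                               batch_first=True, bidirectional=True)
        # Utterance-level accent classifier on pooled encoder states.
        self.accent_head = nn.Linear(2 * hidden_dim, n_accents)
        # One acoustic-model output head per accent (the multi-task part).
        self.am_heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, n_senones) for _ in range(n_accents)])

    def forward(self, feats):
        # feats: (batch, time, feat_dim)
        enc, _ = self.encoder(feats)
        # Accent posterior from mean-pooled encoder states.
        accent_logits = self.accent_head(enc.mean(dim=1))
        # Frame-level senone logits from each accent-specific head.
        am_logits = [head(enc) for head in self.am_heads]
        return accent_logits, am_logits

model = JointAccentAcousticModel()
feats = torch.randn(4, 200, 40)           # dummy batch: 4 utterances, 200 frames
accent_logits, am_logits = model(feats)
accent_labels = torch.randint(0, 2, (4,))         # dummy accent labels
senone_labels = torch.randint(0, 3000, (4, 200))  # dummy frame labels
# Joint loss: accent cross-entropy plus the accent-matched AM head's loss.
ce = nn.CrossEntropyLoss()
accent_loss = ce(accent_logits, accent_labels)
am_loss = sum(ce(am_logits[a][i].reshape(-1, 3000), senone_labels[i])
              for i, a in enumerate(accent_labels.tolist()))
loss = accent_loss + am_loss / len(accent_labels)
loss.backward()
```

A natural variant, not shown here, would weight the per-accent heads by the predicted accent posteriors instead of hard-selecting by the reference accent label; the sketch uses the simpler hard selection for clarity.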