Vishal Sunder, Samuel Thomas, et al.
ICASSP 2022
Large-scale language models (LLMs) such as GPT-2, BERT and RoBERTa have been successfully applied to ASR N-best rescoring. However, whether or how they can benefit competitive, near state-of-the-art ASR systems remains unexplored. In this study, we incorporate LLM rescoring into one of the most competitive ASR baselines: the Conformer-Transducer model. We demonstrate that consistent improvement is achieved by the LLM's bidirectionality, pretraining, in-domain finetuning and context augmentation. Furthermore, our lexical analysis sheds light on how each of these components may be contributing to the ASR performance.
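The rescoring step this abstract describes admits a compact sketch: each hypothesis in the Conformer-Transducer's N-best list is scored by the LLM, and that score is interpolated with the ASR score before picking the best candidate. The code below is a minimal illustration under assumptions, not the paper's implementation: it uses a causal GPT-2 from Hugging Face (the paper also exploits bidirectional models), and the `lm_weight` value, the `rescore` helper, and the sample N-best list are all hypothetical.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def llm_log_likelihood(text: str) -> float:
    """Summed token log-probability of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the ids.size(1) - 1 predicted tokens.
        mean_nll = model(input_ids=ids, labels=ids).loss.item()
    return -mean_nll * (ids.size(1) - 1)

def rescore(nbest, lm_weight=0.3):
    """Return the hypothesis maximizing asr_score + lm_weight * llm_score.

    `nbest` is a list of (text, asr_log_score) pairs, e.g. a beam-search
    N-best list from a Conformer-Transducer (format assumed here).
    """
    return max(nbest, key=lambda h: h[1] + lm_weight * llm_log_likelihood(h[0]))

# Hypothetical 3-best list with made-up transducer log-scores.
nbest = [
    ("i scream for ice cream", -12.4),
    ("eye scream for ice cream", -12.1),
    ("i scream four ice cream", -12.9),
]
print(rescore(nbest)[0])
```

In practice the interpolation weight is tuned on a held-out set, and a bidirectional model such as BERT would replace the summed causal log-probability with a pseudo-log-likelihood (masking each token in turn); the selection logic stays the same.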
Xiaodong Cui, George Saon, et al.
INTERSPEECH 2023
Gakuto Kurata, Kartik Audhkhasi
INTERSPEECH 2019
George Saon, Gakuto Kurata, et al.
INTERSPEECH 2017