A quantitative analysis of OS noise
Alessandro Morari, Roberto Gioiosa, et al.
IPDPS 2011
In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. Compared to the conventional pre-training and fine-tuning strategy, which is a disconnected two-stage process, BL-JUST optimizes an acoustic model so that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both loss functions, the acoustic representations learned by the model strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely used pre-training and fine-tuning strategy and several other popular semi-supervised techniques.
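The penalty-based bilevel gradient descent mentioned in the abstract can be illustrated on a toy problem. This is a hypothetical sketch, not the paper's implementation: the upper-level objective f stands in for the supervised loss, the lower-level objective g for the unsupervised loss, and the bilevel problem is relaxed to minimizing f + rho*g with a growing penalty weight rho.

```python
# Toy penalty-based bilevel gradient descent (illustrative only; the
# names f, g, and penalty_bilevel_gd are assumptions, not from the paper).
# Bilevel problem: minimize f(theta) subject to theta minimizing g(theta).
# Penalty relaxation: minimize f(theta) + rho * g(theta), increasing rho.

def f(t):  # "supervised" loss: prefers theta near (1, 0)
    return (t[0] - 1.0) ** 2 + t[1] ** 2

def grad_f(t):
    return [2.0 * (t[0] - 1.0), 2.0 * t[1]]

def g(t):  # "unsupervised" loss: minimized on the line t0 + t1 = 2
    return (t[0] + t[1] - 2.0) ** 2

def grad_g(t):
    d = 2.0 * (t[0] + t[1] - 2.0)
    return [d, d]

def penalty_bilevel_gd(theta, rho=1.0, lr=0.001, rounds=8, steps=2000):
    for _ in range(rounds):        # outer loop: tighten the penalty
        for _ in range(steps):     # inner loop: plain gradient descent
            gf, gg = grad_f(theta), grad_g(theta)
            theta = [t - lr * (df + rho * dg)
                     for t, df, dg in zip(theta, gf, gg)]
        rho *= 2.0                 # push theta toward the lower-level optima
    return theta

theta = penalty_bilevel_gd([0.0, 0.0])
# theta approaches (1.5, 0.5): the point on the lower-level solution set
# t0 + t1 = 2 that minimizes the upper-level loss f.
```

As rho grows, the iterate is driven onto the lower-level solution set while still descending the upper-level loss within it, which is the "matched local optima" behavior the abstract describes.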
Raymond Wu, Jie Lu
ITA Conference 2007
Yigal Hoffner, Simon Field, et al.
EDOC 2004
Leo Liberti, James Ostrowski
Journal of Global Optimization