Conference paper
Deep structured energy based models for anomaly detection
Shuangfei Zhai, Yu Cheng, et al.
ICML 2016
Recurrent neural networks (RNNs) are recognized as powerful language models (LMs). We investigate their performance profile in depth and find that they perform well on frequent grammatical patterns but much less so on infrequent terms. Such a profile is expected and desirable in applications like autocomplete, but it is less useful in social-content analysis, where many creative, unexpected usages occur (e.g., URL insertion). We adapt a generic RNN model and show that, with variational training corpora and epoch unfolding, the model improves its performance on the task of URL-insertion suggestion.
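For readers unfamiliar with the underlying mechanism, the sketch below shows the forward pass of a minimal RNN language model: at each step a one-hot token is combined with the recurrent hidden state to produce a next-token distribution. All dimensions, weights, and the toy token sequence are hypothetical assumptions for illustration, not the paper's actual model.

```python
import numpy as np

np.random.seed(0)

# Hypothetical toy dimensions; real models are far larger.
vocab_size, hidden_size = 5, 8
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01  # input-to-hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden-to-hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden-to-output

def rnn_lm_step(token_idx, h):
    """One RNN step: one-hot input token -> next-token distribution."""
    x = np.zeros((vocab_size, 1))
    x[token_idx] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h)          # update hidden state
    logits = Why @ h
    p = np.exp(logits - logits.max())       # numerically stable softmax
    return p / p.sum(), h

# Run a short hypothetical token sequence through the model.
h = np.zeros((hidden_size, 1))
for tok in [0, 3, 1]:
    probs, h = rnn_lm_step(tok, h)

print(probs.ravel())  # next-token probabilities; they sum to 1
```

The per-step hidden state `h` is what lets the model favor frequently seen patterns: recurring contexts reinforce the same transitions, which is exactly the frequent-vs-infrequent performance profile the abstract describes.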
Yuan Luo, Yu Cheng, et al.
JAMIA
Zhengping Che, Yu Cheng, et al.
ICDM 2017
Zhaonan Sun, Soumya Ghosh, et al.
JAMIA Open