Guanhua Zhang, Bing Bai, et al.
ACL 2019
The paper presents a first attempt at unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework is composed of a shared encoder and a pair of attentional decoders, crucially assisted by discrimination-based losses and denoising. The framework is trained on unlabeled text collected from the English Wikipedia dump. The authors' analysis (both quantitative and qualitative, involving human evaluators) on public test data shows that the proposed model performs text simplification at both the lexical and the syntactic level, and is competitive with existing supervised methods. It also outperforms viable unsupervised baselines, and adding a few labeled pairs improves performance further.
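The shared-encoder/two-decoder setup with denoising can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: the noise function (word dropout plus local shuffling, a common denoising-autoencoder corruption) and the linear "encoder"/"decoders" stand in for the actual attentional seq2seq components, and all names and parameters are hypothetical.

```python
import random
import numpy as np

def add_noise(tokens, drop_prob=0.1, shuffle_window=3, rng=None):
    """Corrupt a sentence for denoising training: randomly drop words and
    locally shuffle the rest (illustrative parameters, not the paper's)."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > drop_prob]
    # local shuffle: each token moves at most `shuffle_window` positions
    keyed = sorted(enumerate(kept),
                   key=lambda p: p[0] + rng.uniform(0, shuffle_window))
    return [t for _, t in keyed]

class SharedEncoderTwoDecoders:
    """Toy stand-in for a shared encoder feeding a simple-side and a
    complex-side decoder; real models would use attention and RNN/Transformer
    layers instead of these random linear maps."""
    def __init__(self, vocab, dim=16, seed=0):
        gen = np.random.default_rng(seed)
        self.emb = {w: gen.normal(size=dim) for w in vocab}
        self.dec_simple = gen.normal(size=(dim, dim))
        self.dec_complex = gen.normal(size=(dim, dim))

    def encode(self, tokens):
        # shared encoder: mean of embeddings (no attention in this toy)
        return np.mean([self.emb[t] for t in tokens], axis=0)

    def decode(self, z, style):
        # style-specific decoder selected by the target register
        W = self.dec_simple if style == "simple" else self.dec_complex
        return z @ W
```

In the full framework, a discriminator loss would push `decode(z, "simple")` outputs toward the simple-text distribution while the denoising objective reconstructs clean sentences from `add_noise` corruptions.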
Yufang Hou, Charles Jochim, et al.
ACL 2019
Preksha Nema, Mitesh M. Khapra, et al.
ACL 2017
Parag Jain, Abhijit Mishra, et al.
AAAI 2019