A task in a suit and a tie: paraphrase generation with semantic augmentation

S Wang, R Gupta, N Chang, J Baldridge - Proceedings of the AAAI Conference on Artificial Intelligence, 2019 - ojs.aaai.org
Abstract
Paraphrasing is rooted in semantics. We show the effectiveness of transformers (Vaswani et al. 2017) for paraphrase generation and further improvements by incorporating PropBank labels via a multi-encoder. Evaluating on MSCOCO and WikiAnswers, we find that transformers are fast and effective, and that semantic augmentation for both transformers and LSTMs leads to sizable 2-3 point gains in BLEU, METEOR and TER. More importantly, we find surprisingly large gains on human evaluations compared to previous models. Nevertheless, manual inspection of generated paraphrases reveals ample room for improvement: even our best model produces human-acceptable paraphrases for only 28% of captions from the CHIA dataset (Sharma et al. 2018), and it fails spectacularly on sentences from Wikipedia. Overall, these results point to the potential for incorporating semantics in the task while highlighting the need for stronger evaluation.
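
To make the abstract's "PropBank labels via a multi-encoder" idea concrete, below is a minimal PyTorch sketch of one plausible reading: the source sentence and its aligned semantic-role labels are encoded separately, and the decoder attends to the concatenation of both memories. Every name, dimension, and design choice here (the fusion by concatenation, the omitted positional encodings) is an illustrative assumption, not the paper's exact architecture.

import torch
import torch.nn as nn

class MultiEncoderParaphraser(nn.Module):
    """Hypothetical multi-encoder seq2seq: text encoder + SRL-label encoder."""
    def __init__(self, vocab_size=10000, srl_vocab_size=64,
                 d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.srl_emb = nn.Embedding(srl_vocab_size, d_model)
        # Two independent encoders: one over word tokens, one over the
        # token-aligned PropBank SRL label sequence.
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.srl_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, src_srl, tgt_tokens):
        # Encode text and semantic labels separately, then concatenate the
        # two memories along the sequence axis so the decoder cross-attends
        # to both (one of several plausible fusion strategies).
        text_mem = self.text_encoder(self.tok_emb(src_tokens))
        srl_mem = self.srl_encoder(self.srl_emb(src_srl))
        memory = torch.cat([text_mem, srl_mem], dim=1)
        # Causal mask: each target position attends only to earlier ones.
        T = tgt_tokens.size(1)
        tgt_mask = torch.triu(torch.full((T, T), float('-inf')), diagonal=1)
        h = self.decoder(self.tok_emb(tgt_tokens), memory, tgt_mask=tgt_mask)
        return self.out(h)

# Toy usage: batch of 2, source length 12, target length 10.
model = MultiEncoderParaphraser()
src = torch.randint(0, 10000, (2, 12))
srl = torch.randint(0, 64, (2, 12))
tgt = torch.randint(0, 10000, (2, 10))
logits = model(src, srl, tgt)   # shape (2, 10, 10000)

Concatenating the two memories lets a standard decoder weigh lexical and semantic evidence per attention head; alternatives such as separate cross-attention blocks per encoder would serve the same multi-encoder role.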