A unified query-based generative model for question generation and question answering

L Song, Z Wang, W Hamza - arXiv preprint arXiv:1709.01058, 2017 - arxiv.org
We propose a query-based generative model for solving both tasks of question generation (QG) and question answering (QA). The model follows the classic encoder-decoder framework. The encoder takes a passage and a query as input, then performs query understanding by matching the query against the passage from multiple perspectives. The decoder is an attention-based Long Short-Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, the system generates a question given the passage and the target answer, whereas in the QA task, it generates the answer given the question and the passage. During training, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem arising from sequence learning with a cross-entropy loss. On the QG task, our experiments show better performance than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model outperforms the state-of-the-art baselines on the generative QA task.
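The copy mechanism mentioned in the abstract can be illustrated with a minimal sketch of a pointer-style output distribution: with probability p_gen the decoder generates from the fixed vocabulary, and with probability 1 - p_gen it copies a source token weighted by the attention distribution. All names below are illustrative assumptions, not identifiers from the paper.

```python
def mix_copy_distribution(p_gen, vocab_probs, attn_weights, src_tokens):
    """Sketch of a pointer/copy output distribution.

    p_gen        -- probability of generating from the vocabulary
    vocab_probs  -- dict token -> probability from the softmax over the vocabulary
    attn_weights -- attention weight per source position (sums to 1)
    src_tokens   -- source passage tokens aligned with attn_weights

    Out-of-vocabulary source tokens get probability mass only via copying,
    which is what lets such models emit rare passage words.
    """
    final = {tok: p_gen * p for tok, p in vocab_probs.items()}
    for attn, tok in zip(attn_weights, src_tokens):
        final[tok] = final.get(tok, 0.0) + (1.0 - p_gen) * attn
    return final

# Toy example: a 3-word vocabulary distribution and attention over a
# 2-token source passage containing the OOV word "treaty".
vocab_probs = {"what": 0.7, "is": 0.2, "the": 0.1}
dist = mix_copy_distribution(0.8, vocab_probs, [0.6, 0.4], ["treaty", "is"])
```

Note that the result is still a valid probability distribution, since it is a convex combination of two distributions.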
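The policy-gradient training mentioned in the abstract can be sketched numerically. A REINFORCE-style sequence-level objective scales the negative log-likelihood of a sampled sequence by an advantage (sequence reward minus a baseline), so the model is trained on its own samples rather than only on gold prefixes, which is how such methods mitigate exposure bias. The function and values below are a hypothetical sketch, not the paper's exact formulation.

```python
import math

def reinforce_loss(token_log_probs, reward, baseline):
    """REINFORCE-style sequence loss sketch.

    token_log_probs -- log-probabilities of the sampled tokens
    reward          -- sequence-level reward (e.g. an overlap metric)
    baseline        -- baseline reward used to reduce gradient variance

    Minimizing this increases the likelihood of sampled sequences that
    score above the baseline and decreases it for those below.
    """
    advantage = reward - baseline
    return -advantage * sum(token_log_probs)

# Hypothetical 3-token sample with its per-token log-probabilities,
# a sampled-sequence reward of 0.9, and a baseline reward of 0.6.
log_probs = [math.log(0.5), math.log(0.25), math.log(0.8)]
loss = reinforce_loss(log_probs, reward=0.9, baseline=0.6)
```

When the reward equals the baseline the loss (and its gradient) vanishes, which is the variance-reduction role of the baseline.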