Answering while summarizing: Multi-task learning for multi-hop QA with evidence extraction
…, M Nagata, A Otsuka, I Saito, H Asano, J Tomita - arXiv preprint arXiv…, 2019 - arxiv.org
Question answering (QA) using textual sources for purposes such as reading comprehension
(RC) has attracted much attention. This study focuses on the task of explainable multi-hop …
Multi-style generative reading comprehension
…, K Shinoda, A Otsuka, H Asano, J Tomita - arXiv preprint arXiv…, 2019 - arxiv.org
This study tackles generative reading comprehension (RC), which consists of answering
questions based on textual evidence and natural language generation (NLG). We propose a …
Retrieve-and-read: Multi-task learning of information retrieval and reading comprehension
…, I Saito, A Otsuka, H Asano, J Tomita - Proceedings of the 27th…, 2018 - dl.acm.org
This study considers the task of machine reading at scale (MRS) wherein, given a question,
a system first performs the information retrieval (IR) task of finding relevant passages in a …
A simple but effective method to incorporate multi-turn context with BERT for conversational machine comprehension
…, I Saito, K Nishida, H Asano, J Tomita - arXiv preprint arXiv…, 2019 - arxiv.org
Conversational machine comprehension (CMC) requires understanding the context of multi-turn
dialogue. Using BERT, a pre-trained language model, has been successful for single-…
Commonsense knowledge base completion and generation
…, K Nishida, H Asano, J Tomita - Proceedings of the 22nd…, 2018 - aclanthology.org
This study focuses on acquisition of commonsense knowledge. A previous study proposed
a commonsense knowledge base completion (CKB completion) method that predicts a …
Abstractive summarization with combination of pre-trained sequence-to-sequence and saliency models
Pre-trained sequence-to-sequence (seq-to-seq) models have significantly improved the
accuracy of several language generation tasks, including abstractive summarization. Although …
Role play-based question-answering by real users for building chatbots with consistent personalities
…, E Yamaguchi, N Adachi, J Tomita - Proceedings of the…, 2018 - aclanthology.org
Having consistent personalities is important for chatbots if we want them to be believable.
Typically, many question-answer pairs are prepared by hand for achieving consistent …
Analyzing gaze behavior and dialogue act during turn-taking for estimating empathy skill level
…, S Kumano, R Higashinaka, J Tomita - Proceedings of the 20th…, 2018 - dl.acm.org
We explored the gaze behavior towards the end of utterances and dialogue act (DA), i.e.,
verbal-behavior information indicating the intention of an utterance, during turn-keeping/…
Generating body motions using spoken language in dialogue
…, T Katayama, R Higashinaka, J Tomita - Proceedings of the 18th…, 2018 - dl.acm.org
We propose a model to automatically generate whole body motions accompanying
utterances at appropriate times, similar to humans, by using various types of natural-language-…
Length-controllable abstractive summarization by guiding with summary prototype
…, K Nishida, A Otsuka, H Asano, J Tomita… - arXiv preprint arXiv…, 2020 - arxiv.org
We propose a new length-controllable abstractive summarization model. Recent state-of-the-art
abstractive summarization models based on encoder-decoder models generate only …