A Quantitative Evaluation of Natural Language Question Interpretation for Question Answering Systems

Bibliographic Details
Title: A Quantitative Evaluation of Natural Language Question Interpretation for Question Answering Systems
Authors: Asakura, Takuto; Kim, Jin-Dong; Yamamoto, Yasunori; Tateisi, Yuka; Takagi, Toshihisa
Publication Year: 2018
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Systematic benchmark evaluation plays an important role in the process of improving technologies for Question Answering (QA) systems. While a number of evaluation methods currently exist for natural language (NL) QA systems, most of them consider only the final answers, limiting them to black-box style evaluation. Herein, we propose a subdivided evaluation approach that enables finer-grained evaluation of QA systems, and present an evaluation tool that targets the NL question (NLQ) interpretation step, the initial step of a QA pipeline. The results of experiments on two public benchmark datasets suggest that the proposed approach gives deeper insight into the performance of a QA system than black-box style approaches do, and should thus provide better guidance for improving such systems. (A minimal code sketch contrasting the two evaluation styles follows this record.)
Comment: 16 pages, 6 figures, JIST 2018
Document Type: Working Paper
Access URL: http://arxiv.org/abs/1809.07485
Accession Number: edsarx.1809.07485
Database: arXiv
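
To make the black-box vs. subdivided contrast concrete, below is a minimal Python sketch, assuming a two-step pipeline in which an NLQ is first mapped to a structured query (the interpretation step the abstract describes) and the query is then executed to obtain answers. This is not the paper's tool: the Example record, the exact-match scoring, and the toy interpret/execute functions are hypothetical stand-ins for illustration only.

    from dataclasses import dataclass
    from typing import Callable, List, Set

    @dataclass
    class Example:
        question: str             # natural-language question (NLQ)
        gold_interpretation: str  # gold structured query, e.g. SPARQL (hypothetical field)
        gold_answers: Set[str]    # gold final answers

    def black_box_accuracy(qa_system: Callable[[str], Set[str]],
                           data: List[Example]) -> float:
        # Black-box evaluation: only the final answers are compared.
        correct = sum(qa_system(ex.question) == ex.gold_answers for ex in data)
        return correct / len(data)

    def interpretation_accuracy(interpreter: Callable[[str], str],
                                data: List[Example]) -> float:
        # Subdivided evaluation: score the NLQ-interpretation step alone,
        # here by exact match against the gold structured query.
        correct = sum(interpreter(ex.question) == ex.gold_interpretation
                      for ex in data)
        return correct / len(data)

    if __name__ == "__main__":
        data = [Example("Who wrote Hamlet?",
                        "SELECT ?a WHERE { :Hamlet :author ?a }",
                        {"William Shakespeare"})]
        # Hypothetical two-step pipeline: interpret the NLQ, then execute the query.
        interpret = lambda q: "SELECT ?a WHERE { :Hamlet :author ?a }"
        execute = lambda query: {"William Shakespeare"}
        qa_system = lambda q: execute(interpret(q))
        print("black-box accuracy:     ", black_box_accuracy(qa_system, data))
        print("interpretation accuracy:", interpretation_accuracy(interpret, data))

A real subdivided evaluation would likely compare interpretations with a more tolerant metric than exact string match; the point of the sketch is only that the interpretation step is measured separately from the final answers, rather than through them.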