Entropy-Enhanced Multimodal Attention Model for Scene-Aware Dialogue Generation

Bibliographic Details
Title: Entropy-Enhanced Multimodal Attention Model for Scene-Aware Dialogue Generation
Authors: Lin, Kuan-Yen, Hsu, Chao-Chun, Chen, Yun-Nung, Ku, Lun-Wei
Publication Year: 2019
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: With increasing information from social media, more and more videos are available. Therefore, the ability to reason over a video is important and deserves discussion. The Dialog System Technology Challenge (DSTC7) (Yoshino et al. 2018) proposed an Audio Visual Scene-aware Dialog (AVSD) task, which contains five modalities, including video, dialogue history, summary, and caption, as a scene-aware environment. In this paper, we propose the entropy-enhanced dynamic memory network (DMN) to effectively model the video modality. The attention-based GRU in the proposed model improves the model's ability to comprehend and memorize sequential information. The entropy mechanism sharpens the attention distribution, so each to-be-answered question can focus more specifically on a small set of video segments. After the entropy-enhanced DMN secures the video context, we apply an attention model that incorporates the summary and caption to generate an accurate answer given the question about the video. In the official evaluation, our system outperforms the released baseline model on both subjective and objective evaluation metrics.
Comment: DSTC7 collocated with AAAI2019
Document Type: Working Paper
Access URL: http://arxiv.org/abs/1908.08191
Accession Number: edsarx.1908.08191
Database: arXiv
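The abstract's entropy mechanism sharpens the attention distribution so a question attends to only a few video segments. The paper's exact formulation is not given in this record; the following is a minimal illustrative sketch, assuming that sharpening is achieved by scaling attention scores before the softmax (an inverse-temperature stand-in) and that Shannon entropy measures how concentrated the resulting distribution is.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over attention scores.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attention_entropy(weights, eps=1e-12):
    # Shannon entropy of the attention distribution;
    # lower entropy means attention concentrates on fewer segments.
    return -np.sum(weights * np.log(weights + eps))

# Hypothetical question-to-video-segment relevance scores (illustration only).
scores = np.array([0.2, 2.5, 0.1, 0.3])

flat = softmax(scores)          # baseline attention
sharp = softmax(scores * 4.0)   # sharpened attention (assumed mechanism)

# Sharpening lowers the entropy, i.e. focuses attention on fewer segments.
assert attention_entropy(sharp) < attention_entropy(flat)
```

This only illustrates the general relationship between peaked attention weights and low entropy; the paper's actual entropy mechanism inside the DMN may differ.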