AliCHI: A Large-scale Multi-modal Dataset and Automated Evaluation Tool for Human-like Dialogue Systems

Bibliographic Details
Title: AliCHI: A Large-scale Multi-modal Dataset and Automated Evaluation Tool for Human-like Dialogue Systems
Authors: Luo, Zhiling, Shi, Qiankun, Zhao, Sha, Zhou, Wei, Chen, Haiqing, Ma, Yuankai, Leng, Haitao
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Human-Computer Interaction
Description: A well-designed interactive human-like dialogue system is expected to take actions (e.g. smiling) and respond in a pattern similar to humans. However, due to the limitation of single-modality (speech-only) data or the small volume of currently public datasets, most dialogue systems can only respond in speech and cannot take human-like actions. In this work, we build a large-scale multi-modal dataset of face-to-face human-to-human conversation with fine-grained annotations. The raw data, in video format, contains 635 dialogue sessions collected from 200 participants on designed topics, lasting 52 hours in total. Moreover, we manually annotated the verbal and non-verbal behaviors in each dialogue session with their start/end timestamps. Furthermore, we developed a corresponding evaluation tool for human-like dialogue systems that automatically evaluates the accuracy of two basic tasks, turn-taking prediction and backchannel prediction, in terms of both time and content. We have released the data; the tools will be released at the conference.
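The abstract describes evaluating turn-taking and backchannel predictions against timestamped annotations, on both time and content. A minimal sketch of how such a timestamp-based evaluation could work is below; the event format `(start_sec, end_sec, label)`, the tolerance value, and all function names are assumptions for illustration, not the paper's actual tool.

```python
# Hypothetical sketch of timestamp-based evaluation for turn-taking /
# backchannel prediction. Events are assumed to be annotated as
# (start_sec, end_sec, label) tuples; this is NOT the released tool.

def match_event(pred, refs, tol=0.5):
    """Return a reference event whose start lies within `tol` seconds
    of the predicted start, or None if no such event exists."""
    start, _, _ = pred
    for ref in refs:
        if abs(ref[0] - start) <= tol:
            return ref
    return None

def evaluate(preds, refs, tol=0.5):
    """Compute time accuracy (prediction matched a reference start)
    and content accuracy (matched AND the labels agree) over `preds`."""
    time_hits = content_hits = 0
    for pred in preds:
        ref = match_event(pred, refs, tol)
        if ref is not None:
            time_hits += 1
            if ref[2] == pred[2]:
                content_hits += 1
    n = len(preds) or 1  # avoid division by zero on empty input
    return time_hits / n, content_hits / n

# Toy example: two predictions scored against two annotated events.
refs = [(1.0, 1.4, "backchannel"), (5.2, 5.6, "turn-take")]
preds = [(1.2, 1.5, "backchannel"), (5.0, 5.4, "backchannel")]
print(evaluate(preds, refs))  # → (1.0, 0.5)
```

Here the second prediction is close enough in time (0.2 s from the reference start) but carries the wrong label, so it counts for time accuracy only.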
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2212.05489
Accession Number: edsarx.2212.05489
Database: arXiv