Look Hear: Gaze Prediction for Speech-directed Human Attention

Bibliographic Details
Title: Look Hear: Gaze Prediction for Speech-directed Human Attention
Authors: Mondal, Sounak; Ahn, Seoyoung; Yang, Zhibo; Balasubramanian, Niranjan; Samaras, Dimitris; Zelinsky, Gregory; Hoai, Minh
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: For computer systems to interact effectively with humans using spoken language, they need to understand how the words being generated affect the user's moment-by-moment attention. Our study focuses on incrementally predicting attention as a person views an image and hears a referring expression identifying the object in the scene that should be fixated by gaze. To predict gaze scanpaths in this incremental object-referral task, we developed the Attention in Referral Transformer (ART) model, which predicts the human fixations spurred by each word in a referring expression. ART uses a multimodal transformer encoder to jointly learn gaze behavior and its underlying grounding tasks, and an autoregressive transformer decoder to predict, for each word, a variable number of fixations based on the fixation history. To train ART, we created RefCOCO-Gaze, a large-scale dataset of 19,738 human gaze scanpaths, corresponding to 2,094 unique image-expression pairs, from 220 participants performing our referral task. In our quantitative and qualitative analyses, ART not only outperforms existing methods in scanpath prediction but also appears to capture several human attention patterns, such as waiting, scanning, and verification.
Comment: Accepted for ECCV 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.19605
Accession Number: edsarx.2407.19605
Database: arXiv
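
The description above outlines an encoder-decoder design: a multimodal transformer encoder fuses image features with the words of the referring expression, and an autoregressive transformer decoder emits a variable number of fixations per word conditioned on the fixation history. The following is a minimal PyTorch sketch of that overall shape only; every module name, dimension, and the (x, y) fixation parameterization here is an illustrative assumption, not the authors' released implementation.

# Hypothetical sketch of an ART-style encoder-decoder, following the abstract.
# All names, sizes, and the stop-token mechanism are assumptions for illustration.
import torch
import torch.nn as nn

class ARTSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=8, vocab_size=10000, feat_dim=768):
        super().__init__()
        # Multimodal encoder: jointly attends over image patch features and
        # the words of the referring expression heard so far.
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.patch_proj = nn.Linear(feat_dim, d_model)  # e.g. from a visual backbone
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Autoregressive decoder: conditions each fixation on the fixation history.
        self.fix_emb = nn.Linear(2, d_model)        # (x, y) of previous fixations
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.fix_head = nn.Linear(d_model, 2)       # next fixation location
        self.stop_head = nn.Linear(d_model, 1)      # "no more fixations for this word"

    def forward(self, patch_feats, word_ids, fix_history):
        # patch_feats: (B, P, feat_dim); word_ids: (B, Tw); fix_history: (B, Tf, 2)
        tokens = torch.cat(
            [self.patch_proj(patch_feats), self.word_emb(word_ids)], dim=1)
        memory = self.encoder(tokens)
        tgt = self.fix_emb(fix_history)
        # Causal mask so each predicted fixation sees only earlier fixations.
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.decoder(tgt, memory, tgt_mask=causal)
        return self.fix_head(h), self.stop_head(h)

model = ARTSketch()
patches = torch.randn(1, 196, 768)            # image patch features
words = torch.randint(0, 10000, (1, 7))       # partial referring expression
history = torch.rand(1, 3, 2)                 # three prior fixations in [0, 1]^2
next_fix, stop_logit = model(patches, words, history)
print(next_fix.shape, stop_logit.shape)       # (1, 3, 2) and (1, 3, 1)

In a sketch like this, the variable number of fixations per word would come from decoding autoregressively and halting when the stop head fires, which loosely mirrors the per-word, history-conditioned prediction the abstract describes.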