Learning Long-form Video Prior via Generative Pre-Training

Bibliographic Details
Title: Learning Long-form Video Prior via Generative Pre-Training
Authors: Xie, Jinheng; Feng, Jiajun; Tian, Zhaoxu; Lin, Kevin Qinghong; Huang, Yawen; Xia, Xi; Gong, Nanxu; Zuo, Xu; Yang, Jiaqi; Zheng, Yefeng; Shou, Mike Zheng
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Concepts involved in long-form videos, such as people, objects, and their interactions, can be viewed as following an implicit prior. They are notably complex and remain challenging to learn comprehensively. In recent years, generative pre-training (GPT) has exhibited versatile capacities in modeling many kinds of text content, and even visual locations. Can this approach work for learning a long-form video prior? Instead of operating in pixel space, it is efficient to employ visual locations such as bounding boxes and keypoints to represent key information in videos; these can simply be discretized and then tokenized for consumption by GPT (a sketch of this step follows the record below). Due to the scarcity of suitable data, we create a new dataset from movies, Storyboard20K, to serve as a representative. It includes synopses, shot-by-shot keyframes, and fine-grained annotations of film sets and characters with consistent IDs, bounding boxes, and whole-body keypoints. In this way, long-form videos can be represented by a set of tokens and learned via generative pre-training. Experimental results validate that our approach has great potential for learning a long-form video prior. Code and data will be released at https://github.com/showlab/Long-form-Video-Prior.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.15909
Accession Number: edsarx.2404.15909
Database: arXiv
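
The discretize-and-tokenize step described in the abstract can be illustrated with a minimal Python sketch. The bin count, coordinate normalization, and function names below are illustrative assumptions rather than the paper's actual tokenizer: a bounding box is normalized to [0, 1], quantized into a fixed number of bins, and the bin indices serve as token ids for a GPT-style model.

# Minimal sketch (assumed scheme, not the paper's actual tokenizer):
# normalize box coordinates to [0, 1], quantize into NUM_BINS integer
# bins, and use the bin indices as token ids for a GPT-style model.
from typing import List, Tuple

NUM_BINS = 1000  # assumed quantization resolution

def discretize(value: float) -> int:
    """Map a coordinate normalized to [0, 1] onto one of NUM_BINS bins."""
    value = min(max(value, 0.0), 1.0)  # clamp to the valid range
    return min(int(value * NUM_BINS), NUM_BINS - 1)

def box_to_tokens(box: Tuple[float, float, float, float],
                  width: int, height: int) -> List[int]:
    """Turn an (x1, y1, x2, y2) pixel box into four discrete location tokens."""
    x1, y1, x2, y2 = box
    normalized = (x1 / width, y1 / height, x2 / width, y2 / height)
    return [discretize(v) for v in normalized]

# Example: one character box on a 1280x720 keyframe.
print(box_to_tokens((128.0, 90.0, 640.0, 700.0), width=1280, height=720))
# -> [100, 125, 500, 972]

Under the same assumed scheme, each whole-body keypoint could be tokenized as a single (x, y) pair, so a shot-by-shot storyboard flattens into one token sequence suitable for generative pre-training.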