Text-Guided Video Masked Autoencoder

Bibliographic Details
Title: Text-Guided Video Masked Autoencoder
Authors: Fan, David; Wang, Jue; Liao, Shuai; Zhang, Zhikang; Bhat, Vimal; Li, Xinyu
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Recent video masked autoencoder (MAE) works have designed improved masking algorithms focused on saliency. These works leverage visual cues such as motion to mask the most salient regions. However, the robustness of such visual cues depends on how often input videos match underlying assumptions. On the other hand, natural language description is an information-dense representation of video that implicitly captures saliency without requiring modality-specific assumptions, and has not been explored yet for video MAE. To this end, we introduce a novel text-guided masking algorithm (TGM) that masks the video regions with highest correspondence to paired captions. Without leveraging any explicit visual cues for saliency, our TGM is competitive with state-of-the-art masking algorithms such as motion-guided masking. To further benefit from the semantics of natural language for masked reconstruction, we next introduce a unified framework for joint MAE and masked video-text contrastive learning. We show that across existing masking algorithms, unifying MAE and masked video-text contrastive learning improves downstream performance compared to pure MAE on a variety of video recognition tasks, especially for linear probe. Within this unified framework, our TGM achieves the best relative performance on five action recognition datasets and one egocentric dataset, highlighting the complementary nature of natural language for masked video modeling.
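The core idea of text-guided masking, as the description states it, is to mask the video regions with the highest correspondence to a paired caption. A minimal sketch of that selection step might look as follows; the function name, the embedding shapes, and the use of cosine similarity over precomputed patch and caption embeddings are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def text_guided_mask(patch_emb, text_emb, mask_ratio=0.75):
    """Illustrative sketch: mask the patches most similar to the caption.

    patch_emb: (N, D) array of video patch embeddings (assumed encoder output)
    text_emb:  (D,) caption embedding (assumed text-encoder output)
    Returns a boolean array of shape (N,) where True marks a masked patch.
    """
    # Cosine similarity between every patch and the caption embedding.
    p = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sim = p @ t

    # Mask the top mask_ratio fraction of patches by text correspondence,
    # i.e. the regions the caption most strongly describes.
    n_mask = int(mask_ratio * len(sim))
    idx = np.argsort(-sim)[:n_mask]
    mask = np.zeros(len(sim), dtype=bool)
    mask[idx] = True
    return mask
```

Under this sketch, the MAE decoder would then be asked to reconstruct exactly the caption-correlated (most semantically salient) patches, which is what distinguishes TGM from random or motion-guided masking.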
Comment: Accepted to ECCV 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.00759
Accession Number: edsarx.2408.00759
Database: arXiv