RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos

Bibliographic Details
Title: RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos
Authors: Hannan, Tanveer; Islam, Md Mohaiminul; Seidl, Thomas; Bertasius, Gedas
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Locating specific moments within long videos (20-120 minutes) presents a significant challenge, akin to finding a needle in a haystack. Adapting existing short video (5-30 seconds) grounding methods to this problem yields poor performance. Since most real-life videos, such as those on YouTube and in AR/VR, are lengthy, addressing this issue is crucial. Existing methods typically operate in two stages: clip retrieval and grounding. However, this disjoint process limits the retrieval module's fine-grained event understanding, which is crucial for detecting specific moments. We propose RGNet, which deeply integrates clip retrieval and grounding into a single network capable of processing long videos at multiple levels of granularity, e.g., clips and frames. Its core component is a novel transformer encoder, RG-Encoder, that unifies the two stages through shared features and mutual optimization. The encoder incorporates a sparse attention mechanism and an attention loss to model both granularities jointly. Moreover, we introduce a contrastive clip sampling technique to closely mimic the long-video setting during training. RGNet surpasses prior methods, showcasing state-of-the-art performance on the long video temporal grounding (LVTG) datasets MAD and Ego4D. (A minimal illustrative sketch of the shared retrieval-and-grounding design follows this record.)
Comment: The code is released at https://github.com/Tanveer81/RGNet
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2312.06729
Accession Number: edsarx.2312.06729
Database: arXiv
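
The abstract above describes a single network that scores a long video at two granularities from shared encoder features: clip-level scores for retrieval and frame-level scores for grounding. The code below is a minimal, hypothetical PyTorch sketch of that shared-feature idea only; all class, method, and variable names are assumptions for illustration, and it omits RGNet's actual RG-Encoder, sparse attention mechanism, attention loss, and contrastive clip sampling (see the official repository linked in the Comment field for the authors' implementation).

# Hypothetical sketch: one encoder, two heads (clip retrieval + frame grounding).
# Not the authors' RG-Encoder; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class UnifiedRetrievalGroundingSketch(nn.Module):
    def __init__(self, dim=256, heads=4, frames_per_clip=16):
        super().__init__()
        self.frames_per_clip = frames_per_clip
        # Shared transformer encoder over concatenated query + frame tokens.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Frame-level grounding head: per-frame relevance to the text query.
        self.frame_head = nn.Linear(dim, 1)
        # Clip-level retrieval head: scores clips pooled from the same features.
        self.clip_head = nn.Linear(dim, 1)

    def forward(self, frame_feats, query_feats):
        # frame_feats: (B, T, D) pre-extracted frame features of a long video
        #              (T assumed divisible by frames_per_clip for simplicity)
        # query_feats: (B, L, D) text-query token features
        B, T, D = frame_feats.shape
        tokens = torch.cat([query_feats, frame_feats], dim=1)
        encoded = self.encoder(tokens)
        frames = encoded[:, query_feats.size(1):]            # (B, T, D)

        # Grounding: frame-level scores from the shared representation.
        frame_scores = self.frame_head(frames).squeeze(-1)   # (B, T)

        # Retrieval: mean-pool frames into fixed-size clips, then score clips.
        clips = frames.view(B, T // self.frames_per_clip,
                            self.frames_per_clip, D).mean(dim=2)
        clip_scores = self.clip_head(clips).squeeze(-1)       # (B, T // frames_per_clip)
        return clip_scores, frame_scores

if __name__ == "__main__":
    model = UnifiedRetrievalGroundingSketch()
    video = torch.randn(2, 64, 256)   # 64 frames = four 16-frame clips
    query = torch.randn(2, 8, 256)    # 8 query tokens
    clip_scores, frame_scores = model(video, query)
    print(clip_scores.shape, frame_scores.shape)  # torch.Size([2, 4]) torch.Size([2, 64])

Because both heads read the same encoded frame tokens, the retrieval and grounding objectives can be optimized together, which is the shared-feature, mutual-optimization property the abstract attributes to the unified design.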