SparseFusion: Efficient Sparse Multi-Modal Fusion Framework for Long-Range 3D Perception

Bibliographic Details
Title: SparseFusion: Efficient Sparse Multi-Modal Fusion Framework for Long-Range 3D Perception
Authors: Li, Yiheng, Li, Hongyang, Huang, Zehao, Chang, Hong, Wang, Naiyan
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Multi-modal 3D object detection has made significant progress in recent years. However, most existing methods scale poorly to long-range scenarios because they rely on dense 3D features, which substantially escalate computational demands and memory usage. In this paper, we introduce SparseFusion, a novel multi-modal fusion framework built entirely upon sparse 3D features to enable efficient long-range perception. The core of our method is the Sparse View Transformer module, which selectively lifts regions of interest from 2D image space into a unified 3D space. The module introduces sparsity from both semantic and geometric aspects, filling only grids in which foreground objects potentially reside. Comprehensive experiments verify the efficiency and effectiveness of our framework in long-range 3D perception. Remarkably, on the long-range Argoverse2 dataset, SparseFusion reduces the memory footprint and accelerates inference by about a factor of two compared to dense detectors, while achieving state-of-the-art performance with an mAP of 41.2% and a CDS of 32.1%. The versatility of SparseFusion is further validated on the temporal object detection and 3D lane detection tasks. Code will be released upon acceptance.
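The sparse lifting idea in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's actual implementation: all names, thresholds, and the voxel size are assumptions. It combines a semantic cue (a per-pixel foreground score) with a geometric cue (predicted depth) to back-project only promising pixels into a sparse set of occupied 3D grid cells, instead of densifying the whole camera frustum.

```python
import numpy as np

def sparse_lift(fg_score, depth, intrinsics, voxel_size=0.5, score_thresh=0.5):
    """Illustrative sketch: return voxel indices (N, 3) for foreground pixels.

    fg_score   : (H, W) per-pixel foreground probability (semantic sparsity)
    depth      : (H, W) predicted metric depth (geometric cue)
    intrinsics : (3, 3) pinhole camera matrix
    """
    # semantic sparsity: keep only pixels likely covering a foreground object
    vs, us = np.nonzero(fg_score > score_thresh)
    z = depth[vs, us]
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    # geometric lifting: back-project the kept pixels to 3D camera coordinates
    x = (us - cx) / fx * z
    y = (vs - cy) / fy * z
    pts = np.stack([x, y, z], axis=1)
    # quantize to the voxel grid and deduplicate -> sparse set of filled grids
    vox = np.unique(np.floor(pts / voxel_size).astype(np.int64), axis=0)
    return vox

# toy usage: a 4x4 image with a single confident foreground pixel
score = np.zeros((4, 4)); score[1, 2] = 0.9
depth = np.full((4, 4), 10.0)
K = np.array([[100., 0., 2.], [0., 100., 2.], [0., 0., 1.]])
print(sparse_lift(score, depth, K).shape)
```

The memory saving comes from the output scaling with the number of foreground pixels rather than with the full H x W x D dense volume, which is what makes this style of lifting attractive at long range.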
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.10036
Accession Number: edsarx.2403.10036
Database: arXiv