Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception

Bibliographic Details
Title: Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
Authors: Wolters, Philipp; Gilg, Johannes; Teepe, Torben; Herzog, Fabian; Laouichi, Anouar; Hofmann, Martin; Rigoll, Gerhard
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Low-cost, vision-centric 3D perception systems for autonomous driving have made significant progress in recent years, narrowing the gap to expensive LiDAR-based methods. The primary challenge in becoming a fully reliable alternative lies in robust depth prediction capabilities, as camera-based systems struggle with long detection ranges and adverse lighting and weather conditions. In this work, we introduce HyDRa, a novel camera-radar fusion architecture for diverse 3D perception tasks. Building upon the principles of dense BEV (Bird's Eye View)-based architectures, HyDRa introduces a hybrid fusion approach to combine the strengths of complementary camera and radar features in two distinct representation spaces. Our Height Association Transformer module leverages radar features already in the perspective view to produce more robust and accurate depth predictions. In the BEV, we refine the initial sparse representation by a Radar-weighted Depth Consistency. HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset. Moreover, our new semantically rich and spatially accurate BEV features can be directly converted into a powerful occupancy representation, beating all previous camera-based methods on the Occ3D benchmark by an impressive 3.7 mIoU. Code and models are available at https://github.com/phi-wol/hydra.
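To illustrate the general idea of fusing camera and radar features on a shared BEV grid, the minimal PyTorch sketch below concatenates the two feature maps and fuses them with a small convolutional block. This is only a conceptual toy, not the HyDRa architecture: the paper's actual modules (Height Association Transformer, Radar-weighted Depth Consistency) and their implementation are available in the repository linked above, and the class name, channel counts, and 200x200 grid size here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class NaiveCameraRadarBEVFusion(nn.Module):
    """Toy BEV-level camera-radar fusion by channel concatenation.

    NOT the HyDRa method; a hypothetical baseline sketch only. All
    channel sizes and the BEV grid resolution are assumptions.
    """

    def __init__(self, cam_channels: int = 80, radar_channels: int = 64,
                 out_channels: int = 128):
        super().__init__()
        # Fuse concatenated camera + radar BEV features with one conv block.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs are assumed to lie on the same BEV grid, e.g. (B, C, 200, 200).
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))


# Usage with dummy tensors on a 200x200 BEV grid.
fusion = NaiveCameraRadarBEVFusion()
cam_bev = torch.randn(1, 80, 200, 200)
radar_bev = torch.randn(1, 64, 200, 200)
fused = fusion(cam_bev, radar_bev)  # -> shape (1, 128, 200, 200)
```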
Comment: 10 pages, 4 figures. Added evaluation on VoD.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.07746
Accession Number: edsarx.2403.07746
Database: arXiv