Academic Journal

DS-Trans: A 3D Object Detection Method Based on a Deformable Spatiotemporal Transformer for Autonomous Vehicles

Bibliographic Details
Title: DS-Trans: A 3D Object Detection Method Based on a Deformable Spatiotemporal Transformer for Autonomous Vehicles
Authors: Yuan Zhu, Ruidong Xu, Chongben Tao, Hao An, Huaide Wang, Zhipeng Sun, Ke Lu
Source: Remote Sensing, Vol 16, Iss 9, p 1621 (2024)
Publication Information: MDPI AG, 2024.
Publication Year: 2024
Collection: LCC:Science
Subject Terms: autonomous vehicle, 3D object detection, Transformer, point clouds, Science
Description: Existing 3D object detection algorithms based on single-frame point cloud data struggle to achieve desirable results in complex weather conditions and road environments. These methods typically focus on spatial relationships within a single frame, overlooking the semantic correlations and spatiotemporal continuity between consecutive frames, which leads to discontinuities and abrupt changes in the detection outcomes. To address this issue, this paper proposes a multi-frame 3D object detection algorithm based on a deformable spatiotemporal Transformer. Specifically, a deformable cross-scale Transformer module is devised, incorporating a multi-scale offset mechanism that non-uniformly samples features at different scales, enhancing the spatial information aggregation capability of the output features. To address feature misalignment during multi-frame feature fusion, a deformable cross-frame Transformer module is proposed. This module incorporates independently learnable offset parameters for the features of each frame, enabling the model to adaptively correlate dynamic features across multiple frames and improve its temporal information utilization. A proposal-aware sampling algorithm is introduced to significantly increase foreground point recall, further optimizing the efficiency of feature extraction. The resulting multi-scale and multi-frame voxel features are passed to an adaptive fusion weight extraction module, referred to as the mixed voxel set extraction module, which allows the model to adaptively obtain mixed features containing both spatial and temporal information. The effectiveness of the proposed algorithm is validated on the KITTI, nuScenes, and self-collected urban datasets. The proposed algorithm achieves an average precision improvement of 2.1% over the latest multi-frame-based algorithms.
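The deformable sampling idea summarized in the abstract can be sketched in miniature: instead of reading a feature map at fixed grid positions, the model reads it at a reference point plus learned fractional offsets, using bilinear interpolation, and combines the samples with attention-like weights. The snippet below is a minimal, illustrative pure-Python sketch of that mechanism; the function names `bilinear_sample` and `deformable_aggregate` are hypothetical and do not come from the paper, which learns the offsets and weights per scale and per frame inside Transformer modules.

```python
def bilinear_sample(feature, y, x):
    """Bilinearly interpolate a 2D feature map (list of lists) at fractional (y, x)."""
    h, w = len(feature), len(feature[0])
    # Clamp the sampling location to the valid map extent.
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = feature[y0][x0] * (1 - dx) + feature[y0][x1] * dx
    bot = feature[y1][x0] * (1 - dx) + feature[y1][x1] * dx
    return top * (1 - dy) + bot * dy


def deformable_aggregate(feature, ref_y, ref_x, offsets, weights):
    """Aggregate features sampled at non-uniform offsets around a reference point.

    In the paper's modules the offsets and weights would be predicted by the
    network (per scale, or independently per frame); here they are plain inputs.
    """
    return sum(w * bilinear_sample(feature, ref_y + oy, ref_x + ox)
               for (oy, ox), w in zip(offsets, weights))
```

Because the offsets are continuous rather than fixed to the grid, the module can pull information from wherever it is most relevant, which is what allows the cross-frame variant to re-align features of moving objects across consecutive frames.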
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2072-4292
Relation: https://www.mdpi.com/2072-4292/16/9/1621; https://doaj.org/toc/2072-4292
DOI: 10.3390/rs16091621
Access URL: https://doaj.org/article/3840b4e5de9a4b149a91dc4f8764cf69
Accession Number: edsdoj.3840b4e5de9a4b149a91dc4f8764cf69
Database: Directory of Open Access Journals