VEnvision3D: A Synthetic Perception Dataset for 3D Multi-Task Model Research

Bibliographic Details
Title: VEnvision3D: A Synthetic Perception Dataset for 3D Multi-Task Model Research
Authors: Zhou, Jiahao; Long, Chen; Xie, Yue; Wang, Jialiang; Li, Boheng; Wang, Haiping; Chen, Zhe; Dong, Zhen
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Developing a unified multi-task foundation model has become a critical challenge in computer vision research. In the current field of 3D computer vision, most datasets focus only on a single task, which complicates the concurrent training requirements of various downstream tasks. In this paper, we introduce VEnvision3D, a large 3D synthetic perception dataset for multi-task learning, covering depth completion, segmentation, upsampling, place recognition, and 3D reconstruction. Because the data for each task are collected in the same environmental domain, the sub-tasks are inherently aligned in terms of the data they use. This unique attribute therefore helps in exploring the potential of multi-task models, and even foundation models, without separate training methods. Meanwhile, capitalizing on the fact that virtual environments are freely editable, we implement novel settings such as simulating temporal changes in the environment and sampling point clouds on model surfaces. These characteristics enable us to present several new benchmarks. We also perform extensive studies on multi-task end-to-end models, revealing new observations, challenges, and opportunities for future research. Our dataset and code will be open-sourced upon acceptance.
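Note: The description mentions sampling point clouds on model surfaces. Since the paper's code is not yet released, the following is only a minimal illustrative sketch of one standard way such surface sampling is commonly done (area-weighted face selection plus uniform barycentric coordinates), not the authors' actual pipeline; the function name `sample_surface` and the tetrahedron example are hypothetical.

```python
import numpy as np

def sample_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample points on a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) array of sampled surface points.
    """
    rng = np.random.default_rng() if rng is None else rng
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))

    # Choose faces with probability proportional to their area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())

    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    w = 1.0 - u - v
    return (w[:, None] * v0[face_idx]
            + u[:, None] * v1[face_idx]
            + v[:, None] * v2[face_idx])

# Hypothetical example: 1,000 points on a unit tetrahedron's surface.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
points = sample_surface(verts, tris, 1000)
print(points.shape)  # (1000, 3)
```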
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.19059
Accession Number: edsarx.2402.19059
Database: arXiv