Generic 3D Diffusion Adapter Using Controlled Multi-View Editing

Bibliographic Details
Title: Generic 3D Diffusion Adapter Using Controlled Multi-View Editing
Authors: Chen, Hansheng, Shi, Ruoxi, Liu, Yulin, Shen, Bokui, Gu, Jiayuan, Wetzstein, Gordon, Su, Hao, Guibas, Leonidas
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics
Description: Open-domain 3D object synthesis has been lagging behind image synthesis due to limited data and higher computational complexity. To bridge this gap, recent works have investigated multi-view diffusion but often fall short in 3D consistency, visual quality, or efficiency. This paper proposes MVEdit, which functions as a 3D counterpart of SDEdit, employing ancestral sampling to jointly denoise multi-view images and output high-quality textured meshes. Built on off-the-shelf 2D diffusion models, MVEdit achieves 3D consistency through a training-free 3D Adapter, which lifts the 2D views of the last timestep into a coherent 3D representation, then conditions the 2D views of the next timestep using rendered views, without compromising visual quality. With an inference time of only 2-5 minutes, this framework achieves a better trade-off between quality and speed than score distillation. MVEdit is highly versatile and extendable, with a wide range of applications including text/image-to-3D generation, 3D-to-3D editing, and high-quality texture synthesis. In particular, evaluations demonstrate state-of-the-art performance in both image-to-3D and text-guided texture generation tasks. Additionally, we introduce a method for fine-tuning 2D latent diffusion models on small 3D datasets with limited resources, enabling fast low-resolution text-to-3D initialization. (A sketch of the denoise-lift-render loop described here follows the record below.)
Comment: V2 note: Fix missing acknowledgements. Project page: https://lakonik.github.io/mvedit
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.12032
Accession Number: edsarx.2403.12032
Database: arXiv
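
The description outlines MVEdit's core control flow: at each ancestral-sampling timestep, a 2D diffusion model jointly denoises the multi-view images, a training-free 3D Adapter lifts the denoised views into a coherent 3D representation, and renders of that representation condition the next timestep. The following is a minimal sketch of that loop only; every function body is a placeholder stand-in rather than the actual MVEdit implementation (the real code is linked from the project page), and all names here are hypothetical.

```python
import torch

# Hypothetical stand-ins for the components the abstract names; the real
# MVEdit code (https://lakonik.github.io/mvedit) defines its own interfaces.

def denoise_views(views, t, cond):
    """One ancestral-sampling step of an off-the-shelf 2D diffusion model,
    applied jointly to all views; `cond` carries rendered-view guidance.
    Placeholder update rule, not a real diffusion sampler."""
    noise = torch.randn_like(views) * (t / 50)
    guided = views if cond is None else 0.5 * (views + cond)
    return guided - 0.1 * noise

def lift_to_3d(views):
    """Training-free 3D Adapter step: fuse this timestep's denoised views
    into one coherent 3D representation (a textured mesh in the paper).
    Placeholder: a simple average stands in for the 3D state."""
    return views.mean(dim=0, keepdim=True)

def render_views(rep, n_views):
    """Render the 3D representation back at the cameras of the 2D views.
    Placeholder: broadcasts the fused state to every view."""
    return rep.expand(n_views, -1, -1, -1).clone()

n_views, steps = 8, 50
views = torch.randn(n_views, 3, 64, 64)   # start from noise (text-to-3D case)
cond = None
for t in reversed(range(steps)):
    views = denoise_views(views, t, cond)  # 2D denoising, jointly over views
    rep = lift_to_3d(views)                # lift last timestep's views to 3D
    cond = render_views(rep, n_views)      # rendered views condition the next step
# `rep` would be exported as the final textured mesh
```

The design point the abstract emphasizes is that only the conditioning path in this loop is new: the 2D diffusion model itself is used off-the-shelf, which is why the 3D Adapter can remain training-free.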