Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation

Bibliographic Details
Title: Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation
Authors: Kim, Kihong, Lee, Haneol, Park, Jihye, Kim, Seyeon, Lee, Kwanghee, Kim, Seungryong, Yoo, Jaejun
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Generating high-quality videos that synthesize desired realistic content is a challenging task due to the intricate high dimensionality and complexity of videos. Several recent diffusion-based methods have shown comparable performance by compressing videos to a lower-dimensional latent space, using traditional video autoencoder architectures. However, such methods, which employ standard frame-wise 2D and 3D convolution, fail to fully exploit the spatio-temporal nature of videos. To address this issue, we propose a novel hybrid video diffusion model, called HVDM, which can capture spatio-temporal dependencies more effectively. The HVDM is trained by a hybrid video autoencoder which extracts a disentangled representation of the video including: (i) global context information captured by a 2D projected latent, (ii) local volume information captured by 3D convolutions with wavelet decomposition, and (iii) frequency information for improving the video reconstruction. Based on this disentangled representation, our hybrid autoencoder provides a more comprehensive video latent, enriching the generated videos with fine structures and details. Experiments on video generation benchmarks (UCF101, SkyTimelapse, and TaiChi) demonstrate that the proposed approach achieves state-of-the-art video generation quality, showing a wide range of video applications (e.g., long video generation, image-to-video, and video dynamics control).
Comment: Project page is available at https://hxngiee.github.io/HVDM/
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.13729
Accession Number: edsarx.2402.13729
Database: arXiv
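The two representations named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows, under assumed conventions (a grayscale video tensor of shape (T, H, W), mean-pooling as the plane projection, and a single-level orthonormal Haar transform as the wavelet decomposition), what a 2D triplane projection and a 3D wavelet decomposition of a video volume look like:

```python
import numpy as np

def haar_1d(x, axis):
    """Single-level orthonormal Haar transform along one axis.

    Returns a low-pass (pairwise average) and a high-pass (pairwise
    difference) subband, each half the size along `axis`.
    """
    even = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_3d(video):
    """Single-level 3D Haar decomposition of a (T, H, W) volume.

    Splitting along each of the three axes in turn yields 8 subbands
    (LLL ... HHH), each of shape (T/2, H/2, W/2).
    """
    bands = [video]
    for axis in range(3):
        bands = [sub for band in bands for sub in haar_1d(band, axis)]
    return bands

def triplane(video):
    """Project the volume onto three orthogonal planes (global context).

    Mean-pooling is an assumed stand-in for the learned 2D projection.
    """
    return video.mean(axis=0), video.mean(axis=1), video.mean(axis=2)

# Toy 16-frame, 32x32 grayscale video.
video = np.random.rand(16, 32, 32)
subbands = wavelet_3d(video)   # 8 subbands of shape (8, 16, 16)
planes = triplane(video)       # (32, 32), (16, 32), (16, 32)
```

Because the Haar transform here is orthonormal, the total energy of the 8 subbands equals that of the input volume, so no information is lost by the decomposition itself; the low-pass subband carries the coarse local volume structure while the high-pass subbands carry the frequency detail the abstract refers to.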