EgoSonics: Generating Synchronized Audio for Silent Egocentric Videos

Bibliographic Details
Title: EgoSonics: Generating Synchronized Audio for Silent Egocentric Videos
Authors: Rai, Aashish; Sridhar, Srinath
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Multimedia, Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: We introduce EgoSonics, a method to generate semantically meaningful and synchronized audio tracks conditioned on silent egocentric videos. Generating audio for silent egocentric videos could open new applications in virtual reality, assistive technologies, or for augmenting existing datasets. Existing work has been limited to domains like speech, music, or impact sounds and cannot easily capture the broad range of audio frequencies found in egocentric videos. EgoSonics addresses these limitations by building on the strength of latent diffusion models for conditioned audio synthesis. We first encode and process audio and video data into a form that is suitable for generation. The encoded data is used to train our model to generate audio tracks that capture the semantics of the input video. Our proposed SyncroNet builds on top of ControlNet to provide control signals that enable temporal synchronization of the synthesized audio. Extensive evaluations show that our model outperforms existing work in audio quality, and in our newly proposed synchronization evaluation method. Furthermore, we demonstrate downstream applications of our model in improving video summarization.
Comment: preprint
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.20592
Accession Number: edsarx.2407.20592
Database: arXiv