Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis

Bibliographic Details
Title: Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis
Authors: Zhang, Zeyi, Ao, Tenglong, Zhang, Yuyao, Gao, Qingzhe, Lin, Chuan, Chen, Baoquan, Liu, Libin
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Graphics, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: In this work, we present Semantic Gesticulator, a novel framework designed to synthesize realistic gestures accompanying speech with strong semantic correspondence. Semantically meaningful gestures are crucial for effective non-verbal communication, but such gestures often fall within the long tail of the distribution of natural human motion. The sparsity of these movements makes it challenging for deep learning-based systems, trained on moderately sized datasets, to capture the relationship between the movements and the corresponding speech semantics. To address this challenge, we develop a generative retrieval framework based on a large language model. This framework efficiently retrieves suitable semantic gesture candidates from a motion library in response to the input speech. To construct this motion library, we summarize a comprehensive list of commonly used semantic gestures based on findings in linguistics, and we collect a high-quality motion dataset encompassing both body and hand movements. We also design a novel GPT-based model with strong generalization capabilities to audio, capable of generating high-quality gestures that match the rhythm of speech. Furthermore, we propose a semantic alignment mechanism to efficiently align the retrieved semantic gestures with the GPT's output, ensuring the naturalness of the final animation. Our system demonstrates robustness in generating gestures that are rhythmically coherent and semantically explicit, as evidenced by a comprehensive collection of examples. User studies confirm the quality and human-likeness of our results, and show that our system outperforms state-of-the-art systems in terms of semantic appropriateness by a clear margin.
Comment: SIGGRAPH 2024 (Journal Track); Project page: https://pku-mocca.github.io/Semantic-Gesticulator-Page
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2405.09814
Accession Number: edsarx.2405.09814
Database: arXiv