EmoFace: Emotion-Content Disentangled Speech-Driven 3D Talking Face with Mesh Attention

Bibliographic Details
Title: EmoFace: Emotion-Content Disentangled Speech-Driven 3D Talking Face with Mesh Attention
Authors: Lin, Yihong; Peng, Liang; Hu, Jianqiao; Li, Xiandong; Kang, Wenxiong; Lei, Songju; Wu, Xianjia; Xu, Huang
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: The creation of increasingly vivid 3D virtual digital humans has become a hot topic in recent years. Currently, most speech-driven work focuses on training models to learn the relationship between phonemes and visemes so as to achieve more realistic lip motion. However, these methods fail to capture the correlation between emotions and facial expressions effectively. To solve this problem, we propose a new model, termed EmoFace. EmoFace employs a novel Mesh Attention mechanism, which helps to learn potential feature dependencies between mesh vertices in time and space. We also adopt, for the first time to our knowledge, an effective self-growing training scheme that combines teacher forcing and scheduled sampling in a 3D face animation task. Additionally, since EmoFace is an autoregressive model, there is no requirement that the first frame of the training data be a silent frame, which greatly reduces the data limitations and helps to alleviate the current shortage of datasets. Comprehensive quantitative and qualitative evaluations on our proposed high-quality reconstructed 3D emotional facial animation dataset, 3D-RAVDESS ($5.0343\times 10^{-5}$ mm LVE and $1.0196\times 10^{-5}$ mm EVE), and on the publicly available VOCASET dataset ($2.8669\times 10^{-5}$ mm LVE and $0.4664\times 10^{-5}$ mm EVE), demonstrate that our algorithm achieves state-of-the-art performance.
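The abstract's "self-growing training scheme" pairs teacher forcing with scheduled sampling in an autoregressive loop. Below is a minimal PyTorch sketch of how such a mix is commonly implemented; the toy GRU decoder, all names, shapes, and the annealing policy are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    # Toy autoregressive decoder: predicts the next frame of mesh vertex
    # offsets from the previous frame plus an audio feature (illustrative).
    def __init__(self, vertex_dim, audio_dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(vertex_dim + audio_dim, hidden)
        self.out = nn.Linear(hidden, vertex_dim)

    def step(self, prev_frame, audio_feat, h):
        h = self.rnn(torch.cat([prev_frame, audio_feat], dim=-1), h)
        return self.out(h), h

def train_step(model, audio, target, teacher_prob, h):
    # audio: (batch, T, audio_dim); target: (batch, T, vertex_dim).
    # With probability teacher_prob the ground-truth previous frame is fed
    # back (teacher forcing); otherwise the model's own prediction is fed
    # back (scheduled sampling). Annealing teacher_prob toward 0 over
    # training gives the gradual "self-growing" behaviour.
    T = target.shape[1]
    prev = target[:, 0]  # seed with the first frame; it need not be silent
    loss = 0.0
    for t in range(1, T):
        pred, h = model.step(prev, audio[:, t], h)
        loss = loss + torch.mean((pred - target[:, t]) ** 2)
        if torch.rand(()).item() < teacher_prob:
            prev = target[:, t]
        else:
            prev = pred.detach()  # stop gradients through sampled inputs
    return loss / (T - 1)

# Usage with random tensors (dimensions are placeholders):
model = TinyDecoder(vertex_dim=15069, audio_dim=64)  # e.g. 5023 vertices x 3
h0 = torch.zeros(2, 128)
loss = train_step(model, torch.randn(2, 10, 64), torch.randn(2, 10, 15069), 0.8, h0)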
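The Mesh Attention mechanism is described only as learning dependencies between mesh vertices in time and space. One plausible reading is factorized spatial-then-temporal self-attention over a (time, vertices, channels) tensor, sketched below; the module name, factorization order, and shapes are assumptions, and the paper's actual design may differ. Full per-vertex attention over thousands of vertices is costly, so a real implementation might first pool vertices into regions; this sketch keeps the naive form for clarity.

import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    # Two attention passes over mesh features x of shape (batch, time,
    # vertices, channels): across vertices within each frame (space),
    # then across frames for each vertex (time).
    def __init__(self, channels, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, t, v, c = x.shape
        # Spatial pass: each frame attends over its own vertices.
        xs = x.reshape(b * t, v, c)
        xs, _ = self.spatial(xs, xs, xs)
        x = x + xs.reshape(b, t, v, c)  # residual connection
        # Temporal pass: each vertex attends over all frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * v, t, c)
        xt, _ = self.temporal(xt, xt, xt)
        x = x + xt.reshape(b, v, t, c).permute(0, 2, 1, 3)
        return x

# Usage: 2 sequences, 10 frames, 32 vertices (downsampled), 64 channels.
attn = SpatioTemporalAttention(channels=64)
out = attn(torch.randn(2, 10, 32, 64))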
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.11518
Accession Number: edsarx.2408.11518
Database: arXiv