Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis

Bibliographic Details
Title: Fake it to make it: Using synthetic data to remedy the data shortage in joint multimodal speech-and-gesture synthesis
Authors: Mehta, Shivam, Deichler, Anna, O'Regan, Jim, Moëll, Birger, Beskow, Jonas, Henter, Gustav Eje, Alexanderson, Simon
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Human-Computer Interaction, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing, 68T07 (Primary), 68T42 (Secondary), I.2.7, I.2.6, H.5
Description: Although humans engaged in face-to-face conversation simultaneously communicate both verbally and non-verbally, methods for joint and unified synthesis of speech audio and co-speech 3D gesture motion from text are a new and emerging field. These technologies hold great promise for more human-like, efficient, expressive, and robust synthetic communication, but are currently held back by the lack of suitably large datasets, as existing methods are trained on parallel data from all constituent modalities. Inspired by student-teacher methods, we propose a straightforward solution to the data shortage, by simply synthesising additional training material. Specifically, we use unimodal synthesis models trained on large datasets to create multimodal (but synthetic) parallel training data, and then pre-train a joint synthesis model on that material. In addition, we propose a new synthesis architecture that adds better and more controllable prosody modelling to the state-of-the-art method in the field. Our results confirm that pre-training on large amounts of synthetic data improves the quality of both the speech and the motion synthesised by the multimodal model, with the proposed architecture yielding further benefits when pre-trained on the synthetic data. See https://shivammehta25.github.io/MAGI/ for example output.
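The description above outlines a two-stage recipe: unimodal teacher models fabricate parallel (text, speech, gesture) training triples, a joint model is pre-trained on that synthetic corpus, and it is then fine-tuned on the small real multimodal dataset. As a reading aid only, here is a minimal Python sketch of that pipeline under stated assumptions; the `Sample` class and the `synthesise`/`step` interfaces are hypothetical placeholders, not the authors' actual code or API.

```python
# Hypothetical sketch of the synthetic-data pre-training recipe from the
# abstract. Two unimodal teachers (text-to-speech and text-to-gesture), each
# trained on its own large dataset, label a large text corpus to produce
# parallel multimodal triples; the joint student model is pre-trained on
# those triples, then fine-tuned on the small real multimodal dataset.
from dataclasses import dataclass
from typing import Any, Iterable, List

@dataclass
class Sample:
    text: str
    audio: Any   # e.g. a waveform or mel-spectrogram (placeholder type)
    motion: Any  # e.g. a 3D gesture motion sequence (placeholder type)

def synthesise_parallel_data(texts: Iterable[str],
                             tts_model, gesture_model) -> List[Sample]:
    """Fabricate multimodal triples using two unimodal teacher models."""
    data = []
    for text in texts:
        audio = tts_model.synthesise(text)       # hypothetical TTS interface
        motion = gesture_model.synthesise(text)  # hypothetical gesture interface
        data.append(Sample(text, audio, motion))
    return data

def train(joint_model, data: List[Sample], epochs: int) -> None:
    """Placeholder optimisation loop for the joint speech-and-gesture model."""
    for _ in range(epochs):
        for sample in data:
            joint_model.step(sample)             # hypothetical training step

# Usage outline (model objects and corpora are assumptions):
#   synthetic = synthesise_parallel_data(large_text_corpus, tts, gesture)
#   train(joint_model, synthetic, epochs=pretrain_epochs)   # 1. pre-train
#   train(joint_model, real_multimodal_data, epochs=ft_epochs)  # 2. fine-tune
```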
Comment: 13+1 pages, 2 figures, accepted at the Human Motion Generation workshop (HuMoGen) at CVPR 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.19622
Accession Number: edsarx.2404.19622
Database: arXiv