Unified speech and gesture synthesis using flow matching

Bibliographic Details
Title: Unified speech and gesture synthesis using flow matching
Authors: Mehta, Shivam; Tu, Ruibo; Alexanderson, Simon; Beskow, Jonas; Székely, Éva; Henter, Gustav Eje
Publication Year: 2023
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing, Computer Science - Graphics, Computer Science - Human-Computer Interaction, Computer Science - Machine Learning, Computer Science - Sound, 68T07 (Primary), 68T42 (Secondary), I.2.7, I.2.6, H.5
Description: As text-to-speech technologies achieve remarkable naturalness in read-aloud tasks, there is growing interest in multimodal synthesis of verbal and non-verbal communicative behaviour, such as spontaneous speech and associated body gestures. This paper presents a novel, unified architecture for jointly synthesising speech acoustics and skeleton-based 3D gesture motion from text, trained using optimal-transport conditional flow matching (OT-CFM). The proposed architecture is simpler than the previous state of the art, has a smaller memory footprint, and can capture the joint distribution of speech and gestures, generating both modalities together in a single process. The new training regime, meanwhile, enables better synthesis quality in far fewer steps (network evaluations) than before. Uni- and multimodal subjective tests demonstrate improved speech naturalness, gesture human-likeness, and cross-modal appropriateness compared to existing benchmarks. Please see https://shivammehta25.github.io/Match-TTSG/ for video examples and code.
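For context on the training objective named in the abstract: OT-CFM (in the standard formulation of conditional flow matching with optimal-transport paths; the exact variant used in the paper may differ) regresses a neural velocity field onto the straight-line path between a Gaussian sample $x_0$ and a data sample $x_1$. A minimal sketch of the objective, with $\sigma_{\min}$ a small smoothing constant:

$$
x_t = \bigl(1 - (1 - \sigma_{\min})\,t\bigr)\,x_0 + t\,x_1,
\qquad
u_t = x_1 - (1 - \sigma_{\min})\,x_0,
$$
$$
\mathcal{L}_{\mathrm{OT\text{-}CFM}}
= \mathbb{E}_{\,t \sim \mathcal{U}(0,1),\; x_0 \sim \mathcal{N}(0, I),\; x_1 \sim q}
\bigl\lVert v_\theta(x_t, t) - u_t \bigr\rVert^2 .
$$

Because the target velocity $u_t$ is constant along each path, the learned flow is nearly straight, which is what allows high-quality synthesis in few network evaluations.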
Comment: 5 pages, 1 figure. Final version, accepted to IEEE ICASSP 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2310.05181
Accession Number: edsarx.2310.05181
Database: arXiv