Report
Predicting Expressive Speaking Style From Text In End-To-End Speech Synthesis
| Title | Predicting Expressive Speaking Style From Text In End-To-End Speech Synthesis |
| --- | --- |
| Authors | Stanton, Daisy; Wang, Yuxuan; Skerry-Ryan, RJ |
| Publication Year | 2018 |
| Collection | Computer Science; Statistics |
| Subject Terms | Computer Science - Computation and Language; Computer Science - Machine Learning; Computer Science - Sound; Electrical Engineering and Systems Science - Audio and Speech Processing; Statistics - Machine Learning; eess.AS |
| Description | Global Style Tokens (GSTs) are a recently-proposed method to learn latent disentangled representations of high-dimensional data. GSTs can be used within Tacotron, a state-of-the-art end-to-end text-to-speech synthesis system, to uncover expressive factors of variation in speaking style. In this work, we introduce the Text-Predicted Global Style Token (TP-GST) architecture, which treats GST combination weights or style embeddings as "virtual" speaking style labels within Tacotron. TP-GST learns to predict stylistic renderings from text alone, requiring neither explicit labels during training nor auxiliary inputs for inference. We show that, when trained on a dataset of expressive speech, our system generates audio with more pitch and energy variation than two state-of-the-art baseline models. We further demonstrate that TP-GSTs can synthesize speech with background noise removed, and corroborate these analyses with positive results on human-rated listener preference audiobook tasks. Finally, we demonstrate that multi-speaker TP-GST models successfully factorize speaker identity and speaking style. We provide a website with audio samples for each of our findings. |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/1808.01410 |
| Accession Number | edsarx.1808.01410 |
| Database | arXiv |
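
The description above outlines the TP-GST idea: a small head predicts combination weights over the GST bank from the text encoding, and the weighted sum of style tokens conditions synthesis, so no reference audio or style label is needed at inference. The following is a minimal sketch of that combination-weights variant only; all module names, dimensions (`NUM_TOKENS`, `TOKEN_DIM`, `TEXT_DIM`), and layer choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of text-predicted GST combination weights, as described
# in the abstract. Not the authors' code; sizes and layers are assumptions.
import torch
import torch.nn as nn

NUM_TOKENS = 10   # size of the GST bank (assumed)
TOKEN_DIM = 256   # style embedding dimensionality (assumed)
TEXT_DIM = 256    # Tacotron text-encoder output dimensionality (assumed)

class TextPredictedGST(nn.Module):
    def __init__(self):
        super().__init__()
        # Bank of global style tokens, learned jointly with the synthesizer.
        self.style_tokens = nn.Parameter(torch.randn(NUM_TOKENS, TOKEN_DIM))
        # Small prediction head: text encoding -> combination weights.
        self.rnn = nn.GRU(TEXT_DIM, 128, batch_first=True)
        self.proj = nn.Linear(128, NUM_TOKENS)

    def forward(self, text_encoding):
        # text_encoding: (batch, time, TEXT_DIM) from the text encoder.
        _, last_state = self.rnn(text_encoding)        # (1, batch, 128)
        logits = self.proj(last_state.squeeze(0))      # (batch, NUM_TOKENS)
        weights = torch.softmax(logits, dim=-1)        # "virtual" style labels
        # Weighted combination of style tokens gives the style embedding that
        # conditions the decoder; no reference audio is needed at inference.
        style_embedding = weights @ self.style_tokens  # (batch, TOKEN_DIM)
        return style_embedding, weights
```

The paper's abstract also mentions predicting the style embedding directly rather than the combination weights; that variant would replace the softmax-weighted sum with a regression head producing a `TOKEN_DIM`-sized vector.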