Small-E: Small Language Model with Linear Attention for Efficient Speech Synthesis

Bibliographic Details
Title: Small-E: Small Language Model with Linear Attention for Efficient Speech Synthesis
Authors: Lemerle, Théodor; Obin, Nicolas; Roebel, Axel
Publication Year: 2024
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing; Computer Science - Computation and Language; Computer Science - Sound
Description: Recent advances in text-to-speech (TTS) powered by language models have showcased remarkable capabilities in naturalness and zero-shot voice cloning. Notably, the decoder-only transformer is the prominent architecture in this domain. However, transformers suffer from quadratic complexity in sequence length, which impedes training on long sequences and on resource-constrained hardware. Moreover, they lack a specific inductive bias with regard to the monotonic nature of TTS alignments. In response, we propose to replace transformers with emerging recurrent architectures (see the linear-attention sketch after this record) and introduce specialized cross-attention mechanisms to reduce repeating and skipping issues. Consequently, our architecture can be efficiently trained on long samples and achieve state-of-the-art zero-shot voice cloning against baselines of comparable size. Our implementation and demos are available at https://github.com/theodorblackbird/lina-speech.
Comment: Interspeech
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.04467
Accession Number: edsarx.2406.04467
Database: arXiv
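
The efficiency claim in the abstract rests on linear attention, whose per-step cost is independent of sequence length. As a rough illustration only (not the authors' implementation, which is in the lina-speech repository linked above), the following numpy sketch computes causal linear attention in its recurrent form with the common elu(x)+1 feature map; the function names and the feature-map choice are assumptions, not taken from the paper.

import numpy as np

def feature_map(x):
    # phi(x) = elu(x) + 1: a standard positive feature map for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V, eps=1e-6):
    # Q, K: (N, d_k); V: (N, d_v).
    # The running state (S, z) has fixed size, so each step costs
    # O(d_k * d_v) regardless of N; softmax attention costs O(N) per step.
    N, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d_k)          # running sum of phi(k_t), for normalization
    out = np.empty((N, d_v))
    for t in range(N):
        q, k = feature_map(Q[t]), feature_map(K[t])
        S += np.outer(k, V[t])
        z += k
        out[t] = (q @ S) / (q @ z + eps)
    return out

# Toy usage: a 1000-step sequence with 64-dim heads; the state stays constant-size.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((1000, 64)) for _ in range(3))
print(causal_linear_attention(Q, K, V).shape)  # (1000, 64)

Because the state (S, z) is a fixed-size summary of the past, this recurrent view is what lets such models train on long samples and on modest hardware, which is the bottleneck the abstract attributes to quadratic softmax attention.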