Structured State Space Decoder for Speech Recognition and Synthesis

Bibliographic Details
Title: Structured State Space Decoder for Speech Recognition and Synthesis
Authors: Miyazaki, Koichi; Murata, Masato; Koriyama, Tomoki
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: Automatic speech recognition (ASR) systems developed in recent years have shown promising results with self-attention models (e.g., Transformer and Conformer), which are replacing conventional recurrent neural networks. Meanwhile, the structured state space model (S4) has recently been proposed, producing promising results on various long-sequence modeling tasks, including raw speech classification. Like the Transformer, the S4 model can be trained in parallel. In this study, we applied S4 as a decoder for ASR and text-to-speech (TTS) tasks and compared it with the Transformer decoder. For the ASR task, our experimental results demonstrate that the proposed model achieves a competitive word error rate (WER) of 1.88%/4.25% on the LibriSpeech test-clean/test-other sets and a character error rate (CER) of 3.80%/2.63%/2.98% on the CSJ eval1/eval2/eval3 sets. Furthermore, the proposed model is more robust than the standard Transformer model, particularly for long-form speech, on both datasets. For the TTS task, the proposed method outperforms the Transformer baseline.
Comment: Submitted to ICASSP 2023
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2210.17098
Accession Number: edsarx.2210.17098
Database: arXiv
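
Note: The description above states that S4, like the Transformer, can be trained in parallel while still admitting step-by-step decoding. The Python/NumPy sketch below is only an illustrative toy, not the paper's implementation: the names, sizes, and the dense toy A matrix are assumptions (S4 uses a structured HiPPO-based A and a fast kernel computation). It evaluates the same discretized state space layer both as a recurrence and as a causal convolution and checks that the two views give identical outputs.

# Minimal sketch of the state space recurrence x_k = A x_{k-1} + B u_k,
# y_k = C x_k, evaluated sequentially (as in autoregressive decoding) and
# in parallel as a convolution with kernel K = (CB, CAB, CA^2B, ...).
import numpy as np

rng = np.random.default_rng(0)
N, L = 4, 16                       # state size, sequence length (toy values)
A = 0.1 * rng.normal(size=(N, N))  # dense toy A; S4 uses a structured matrix
B = rng.normal(size=(N, 1))
C = rng.normal(size=(1, N))
u = rng.normal(size=L)             # scalar input sequence

# Sequential (recurrent) evaluation.
x = np.zeros((N, 1))
y_rec = np.empty(L)
for k in range(L):
    x = A @ x + B * u[k]
    y_rec[k] = (C @ x).item()

# Parallel evaluation: build the convolution kernel once, then apply a
# causal convolution over the whole sequence, as done during training.
K = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(L)])
y_conv = np.array([np.dot(K[:k + 1][::-1], u[:k + 1]) for k in range(L)])

assert np.allclose(y_rec, y_conv)  # both views produce the same output

In S4 itself, the kernel is computed efficiently from the structured A rather than by explicit matrix powers as in this toy example; the sketch only shows why the same layer supports both parallel training and recurrent decoding.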