Joint Speaker Features Learning for Audio-visual Multichannel Speech Separation and Recognition

Bibliographic Details
Title: Joint Speaker Features Learning for Audio-visual Multichannel Speech Separation and Recognition
Authors: Li, Guinan, Deng, Jiajun, Chen, Youjun, Geng, Mengzhe, Hu, Shujie, Li, Zhe, Jin, Zengrui, Wang, Tianzi, Xie, Xurong, Meng, Helen, Liu, Xunying
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: This paper proposes joint speaker feature learning methods for zero-shot adaptation of audio-visual multichannel speech separation and recognition systems. xVector and ECAPA-TDNN speaker encoders are connected using purpose-built fusion blocks and tightly integrated with the training of the complete system. Experiments on multichannel overlapped speech simulated from LRS3-TED data suggest that joint speaker feature learning consistently improves speech separation and recognition performance over baselines without joint speaker feature estimation. Further analyses reveal that the performance improvements are strongly correlated with increased inter-speaker discrimination, measured using cosine similarity. After incorporating WavLM features and the video modality, the best-performing joint speaker feature learning adapted system outperformed the baseline fine-tuned WavLM model by statistically significant WER reductions of 21.6% and 25.3% absolute (67.5% and 83.5% relative) on the Dev and Test sets.
Comment: Accepted by Interspeech 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.10152
Accession Number: edsarx.2406.10152
Database: arXiv