Exploring Self-Attention Mechanisms for Speech Separation

Bibliographic Details
Title: Exploring Self-Attention Mechanisms for Speech Separation
Authors: Subakan, Cem; Ravanelli, Mirco; Cornell, Samuele; Grondin, Francois; Bronzi, Mirko
Publication Year: 2022
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing, Computer Science - Machine Learning, Computer Science - Sound, Electrical Engineering and Systems Science - Signal Processing
Description: Transformers have enabled impressive improvements in deep learning. They often outperform recurrent and convolutional models in many tasks while taking advantage of parallel processing. Recently, we proposed the SepFormer, which obtains state-of-the-art performance in speech separation on the WSJ0-2/3 Mix datasets. This paper studies Transformers for speech separation in depth. In particular, we extend our previous findings on the SepFormer by providing results on more challenging noisy and noisy-reverberant datasets, such as LibriMix, WHAM!, and WHAMR!. Moreover, we extend our model to perform speech enhancement and provide experimental evidence on denoising and dereverberation tasks. Finally, we investigate, for the first time in speech separation, the use of efficient self-attention mechanisms such as Linformers, Longformers, and Reformers. We found that they reduce memory requirements significantly. For example, we show that the Reformer-based attention outperforms the popular Conv-TasNet model on the WSJ0-2Mix dataset while being faster at inference and comparable in terms of memory consumption.
Comment: Accepted to IEEE/ACM Transactions on Audio, Speech, and Language Processing
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2202.02884
Accession Number: edsarx.2202.02884
Database: arXiv
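
The description above contrasts standard self-attention, whose attention map grows quadratically with the number of time frames, with efficient variants such as the Linformer. The PyTorch sketch below is a rough illustration of that contrast only; it is not taken from the SepFormer or any of the cited models, and the class names, dimensions (d_model = 64, compressed length k = 32, sequence length 1000) are illustrative assumptions.

# Minimal sketch, assuming fixed-length inputs: standard self-attention builds a
# (T x T) attention map, while a Linformer-style layer compresses keys/values to
# length k, so the map is only (T x k).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StandardSelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x):                       # x: (batch, T, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (batch, T, T)
        return attn @ v

class LinformerStyleSelfAttention(nn.Module):
    """Low-rank attention: keys and values are compressed along time with a
    learned projection of shape (k, T), so the attention map is (T, k)."""
    def __init__(self, d_model: int, seq_len: int, k: int = 32):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.proj = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.scale = d_model ** -0.5

    def forward(self, x):                       # x: (batch, T, d_model), T == seq_len
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = self.proj @ k                       # (batch, k, d_model)
        v = self.proj @ v                       # (batch, k, d_model)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (batch, T, k)
        return attn @ v

if __name__ == "__main__":
    x = torch.randn(2, 1000, 64)                # e.g. 1000 frames of encoder features
    print(StandardSelfAttention(64)(x).shape)                    # torch.Size([2, 1000, 64])
    print(LinformerStyleSelfAttention(64, seq_len=1000)(x).shape)  # torch.Size([2, 1000, 64])

With 1000 frames, the standard layer materialises a 1000 x 1000 attention map per head, whereas the Linformer-style layer materialises only a 1000 x 32 map; this is the kind of memory saving the efficient self-attention variants discussed in the paper are designed to achieve.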