Papez: Resource-Efficient Speech Separation with Auditory Working Memory

Bibliographic Details
Title: Papez: Resource-Efficient Speech Separation with Auditory Working Memory
Authors: Oh, Hyunseok; Yi, Juheon; Lee, Youngki
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Computation and Language, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: Transformer-based models have recently reached state-of-the-art single-channel speech separation accuracy. However, their extreme computational load makes it difficult to deploy them on resource-constrained mobile or IoT devices. We thus present Papez, a lightweight and computation-efficient single-channel speech separation model. Papez is based on three key techniques. First, we replace the inter-chunk Transformer with a small-sized auditory working memory. Second, we adaptively prune input tokens that do not need further processing. Finally, we reduce the number of parameters through a recurrent transformer. Our extensive evaluation shows that Papez achieves the best resource-accuracy tradeoff by a large margin. We publicly share our source code at https://github.com/snuhcs/Papez
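The three techniques named in the description can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (see the linked repository for that); the class name, parameter choices, and the score-based pruning rule below are hypothetical placeholders meant only to show the general shape of a weight-shared (recurrent) Transformer layer with a small learnable memory and token pruning.

```python
import torch
import torch.nn as nn

class PapezLikeBlock(nn.Module):
    """Illustrative sketch only: one Transformer encoder layer applied
    recurrently (weights shared across iterations), with a small bank of
    learnable memory tokens prepended to the input and a simple
    score-based pruning of input tokens between iterations."""

    def __init__(self, dim=64, n_heads=4, n_memory=8, n_iters=4, prune_ratio=0.25):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        # Learnable "working memory" tokens (assumed form, not the paper's exact mechanism)
        self.memory = nn.Parameter(torch.randn(1, n_memory, dim))
        self.score = nn.Linear(dim, 1)   # hypothetical per-token keep score
        self.n_memory = n_memory
        self.n_iters = n_iters
        self.prune_ratio = prune_ratio

    def forward(self, x):                # x: (batch, tokens, dim)
        b = x.size(0)
        mem = self.memory.expand(b, -1, -1)
        for _ in range(self.n_iters):    # recurrent reuse of one layer's weights
            h = self.layer(torch.cat([mem, x], dim=1))
            mem, x = h[:, :self.n_memory], h[:, self.n_memory:]
            # Keep the highest-scoring tokens; the paper's pruning criterion differs.
            keep = max(1, int(x.size(1) * (1 - self.prune_ratio)))
            idx = self.score(x).squeeze(-1).topk(keep, dim=1).indices.sort(dim=1).values
            x = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return mem, x

# Example: two utterances of 100 feature tokens each, dimension 64
out_mem, out_x = PapezLikeBlock()(torch.randn(2, 100, 64))
```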
Comment: 5 pages. Accepted by ICASSP 2023
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.00888
Accession Number: edsarx.2407.00888
Database: arXiv