Receptive-Field Regularized CNNs for Music Classification and Tagging

Bibliographic Details
Title: Receptive-Field Regularized CNNs for Music Classification and Tagging
Authors: Koutini, Khaled; Eghbal-Zadeh, Hamid; Haunschmid, Verena; Primus, Paul; Chowdhury, Shreyan; Widmer, Gerhard
Publication Year: 2020
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing; Computer Science - Machine Learning; Computer Science - Sound
Description: Convolutional Neural Networks (CNNs) have been successfully used in various Music Information Retrieval (MIR) tasks, both as end-to-end models and as feature extractors for more complex systems. However, the MIR field is still dominated by the classical VGG-based CNN architecture variants, often in combination with more complex modules such as attention, and/or techniques such as pre-training on large datasets. Deeper models such as ResNet -- which surpassed VGG by a large margin in other domains -- are rarely used in MIR. One of the main reasons for this, as we will show, is the lack of generalization of deeper CNNs in the music domain. In this paper, we present a principled way to make deep architectures like ResNet competitive for music-related tasks, based on well-designed regularization strategies. In particular, we analyze the recently introduced Receptive-Field Regularization and Shake-Shake, and show that they significantly improve the generalization of deep CNNs on music-related tasks, and that the resulting deep CNNs can outperform current, more complex models such as CNNs augmented with pre-training and attention. We demonstrate this on two different MIR tasks and two corresponding datasets, thus offering our deep regularized CNNs as a new baseline for these datasets, which can also be used as a feature-extracting module in future, more complex approaches.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2007.13503
Accession Number: edsarx.2007.13503
Database: arXiv
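The Receptive-Field Regularization discussed in the abstract constrains how large a region of the input spectrogram each output unit of the CNN can "see". A minimal sketch of the underlying quantity, using the standard receptive-field recursion for a stack of convolution/pooling layers (the function name and the example layer stack are illustrative assumptions, not code from the paper):

```python
def receptive_field(layers):
    """Compute the overall receptive field of a stack of conv/pool layers.

    layers: list of (kernel_size, stride) tuples, ordered from the input.
    Returns the receptive field of one output unit, in input pixels,
    via the standard recursion rf += (k - 1) * jump; jump *= stride.
    """
    rf = 1    # receptive field of the identity mapping
    jump = 1  # distance in input pixels between adjacent units at this depth
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Illustrative stack: 3x3 convolutions with occasional stride-2 downsampling.
stack = [(3, 1), (3, 1), (3, 2), (3, 1), (3, 2)]
print(receptive_field(stack))  # -> 15
```

Deeper ResNets accumulate very large receptive fields through this recursion; receptive-field regularization adapts the architecture (e.g., kernel sizes and strides) so this value stays in a suitable range for spectrogram inputs.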