Unsupervised Learning of Deep Features for Music Segmentation

Bibliographic Details
Title: Unsupervised Learning of Deep Features for Music Segmentation
Authors: McCallum, Matthew C.
Source: ICASSP 2019
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Machine Learning, Computer Science - Multimedia, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: Music segmentation refers to the dual problem of identifying boundaries between, and labeling, distinct music segments, e.g., the chorus, verse, bridge, etc. in popular music. The performance of a range of music segmentation algorithms has been shown to depend on the audio features chosen to represent the audio. Some approaches have proposed learning feature transformations from music segment annotation data, although such data is time-consuming or expensive to create, and as such these approaches are likely limited by the size of their datasets. While annotated music segmentation data is a scarce resource, the amount of available music audio is much greater. In the neighboring field of semantic audio, unsupervised deep learning has shown promise in improving the performance of solutions to the query-by-example and sound classification tasks. In this work, unsupervised training of deep feature embeddings using convolutional neural networks (CNNs) is explored for music segmentation. The proposed techniques exploit only the time proximity of audio features that is implicit in any audio timeline. Employing these embeddings in a classic music segmentation algorithm is shown not only to significantly improve the performance of this algorithm, but also to obtain state-of-the-art performance in unsupervised music segmentation.
Document Type: Working Paper
DOI: 10.1109/ICASSP.2019.8683407
Access URL: http://arxiv.org/abs/2108.12955
Accession Number: edsarx.2108.12955
Database: arXiv
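The abstract describes learning embeddings supervised only by the time proximity of audio frames on the timeline: frames close in time serve as positive pairs, distant frames as negatives. A minimal sketch of such proximity-based triplet sampling is shown below; the function name and the distance thresholds are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sample_triplet(features, pos_max_dist=4, neg_min_dist=16, rng=None):
    """Sample (anchor, positive, negative) feature frames from an audio timeline.

    Positives lie within `pos_max_dist` frames of the anchor; negatives lie at
    least `neg_min_dist` frames away. Thresholds are hypothetical placeholders.
    """
    rng = rng or np.random.default_rng()
    n = features.shape[0]
    # Keep the anchor away from the edges so both pair types can be drawn.
    anchor = int(rng.integers(neg_min_dist, n - neg_min_dist))
    # Positive: a nearby frame on either side of the anchor.
    offset = int(rng.integers(1, pos_max_dist + 1)) * int(rng.choice([-1, 1]))
    positive = anchor + offset
    # Negative: rejection-sample any frame far from the anchor in time.
    while True:
        negative = int(rng.integers(0, n))
        if abs(negative - anchor) >= neg_min_dist:
            break
    return features[anchor], features[positive], features[negative]
```

Triplets sampled this way can feed a standard triplet or contrastive loss on CNN embeddings, so the only supervision signal is the audio timeline itself, matching the unsupervised setup the abstract describes.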