EMOVOME Database: Advancing Emotion Recognition in Speech Beyond Staged Scenarios

Bibliographic Details
Title: EMOVOME Database: Advancing Emotion Recognition in Speech Beyond Staged Scenarios
Authors: Gómez-Zaragozá, Lucía, del Amor, Rocío, Castro-Bleda, María José, Naranjo, Valery, Alcañiz Raya, Mariano, Marín-Morales, Javier
Publication Year: 2024
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Sound, I.5.1, I.5.4
Description: Natural databases for Speech Emotion Recognition (SER) are scarce and often rely on staged scenarios, such as films or television shows, limiting their application in real-world contexts. We developed and publicly released the Emotional Voice Messages (EMOVOME) database, comprising 999 voice messages from real conversations of 100 Spanish speakers on a messaging app, labeled in continuous and discrete emotions by expert and non-expert annotators. We evaluated speaker-independent SER models using a standard set of acoustic features and transformer-based models. We compared the results with reference databases including acted and elicited speech, and analyzed the influence of annotators and gender fairness. The pre-trained UniSpeech-SAT-Large model achieved the highest results, 61.64% and 55.57% Unweighted Accuracy (UA) for 3-class valence and arousal prediction respectively on EMOVOME, a 10% improvement over baseline models. For the emotion categories, 42.58% UA was obtained. EMOVOME performed worse than the acted RAVDESS database. The elicited IEMOCAP database also outperformed EMOVOME in predicting emotion categories, while similar results were obtained in valence and arousal. EMOVOME outcomes varied with annotator labels, showing better results and fairness when combining expert and non-expert annotations. This study highlights the gap between staged and real-life scenarios, supporting further advancements in recognizing genuine emotions.
Comment: This article is a merged version of the description of the EMOVOME database in arXiv:2402.17496v1 and the speech emotion recognition models in arXiv:2403.02167v1. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.02167
Accession Number: edsarx.2403.02167
Database: arXiv
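The abstract reports results as Unweighted Accuracy (UA), i.e., the average of per-class recalls, which weights each emotion class equally regardless of how many samples it has. A minimal sketch of that metric follows; the 3-class valence labels in the example are hypothetical, not data from EMOVOME.

```python
from collections import defaultdict

def unweighted_accuracy(y_true, y_pred):
    """Unweighted Accuracy (UA): the mean of per-class recalls.

    Unlike plain accuracy, UA is not dominated by frequent classes,
    which matters for imbalanced emotion datasets.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    # Average the per-class recalls over the classes present in y_true.
    return sum(correct[c] / total[c] for c in total) / len(total)

# Hypothetical 3-class valence labels: 0=negative, 1=neutral, 2=positive.
true = [0, 0, 1, 1, 1, 2]
pred = [0, 1, 1, 1, 0, 2]
print(unweighted_accuracy(true, pred))  # recalls 1/2, 2/3, 1/1 -> ~0.722
```

Because the rare positive class is fully recalled here, UA (≈0.72) exceeds plain accuracy (4/6 ≈ 0.67), illustrating why the two metrics diverge on imbalanced labels.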