Deep Learning Based Multimodal with Two-phase Training Strategy for Daily Life Video Classification

Bibliographic Details
Title: Deep Learning Based Multimodal with Two-phase Training Strategy for Daily Life Video Classification
Authors: Pham, Lam, Le, Trang, Le, Cam, Ngo, Dat, Weissenfeld, Axel, Schindler, Alexander
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Multimedia, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: In this paper, we present a deep learning based multimodal system for classifying daily life videos. To train the system, we propose a two-phase training strategy. In the first training phase (Phase I), we extract the audio and visual (image) data from the original video. We then train independent deep learning based models on the audio data and the visual data. After the training processes, we obtain audio embeddings and visual embeddings by extracting feature maps from the pre-trained deep learning models. In the second training phase (Phase II), we train a fusion layer to combine the audio/visual embeddings and a dense layer to classify the combined embedding into target daily scenes. Our extensive experiments, conducted on the benchmark dataset of DCASE (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) 2021 Task 1B Development, achieved best classification accuracies of 80.5%, 91.8%, and 95.3% with only audio data, only visual data, and both audio and visual data, respectively. The highest classification accuracy of 95.3% represents an improvement of 17.9% over the DCASE baseline and is very competitive with state-of-the-art systems.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2305.01476
Accession Number: edsarx.2305.01476
Database: arXiv
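
The description above outlines a two-phase strategy in which audio and visual embeddings, extracted from models trained separately in Phase I, are combined by a fusion layer and classified by a dense layer in Phase II. The following is a minimal illustrative sketch of such a Phase II head in PyTorch; the embedding sizes, the concatenation-based fusion, and the number of scene classes are assumptions for illustration and are not taken from the paper.

# Minimal sketch (not the authors' code) of a Phase II fusion-and-classify head:
# pre-extracted audio and visual embeddings are combined by a learned fusion
# layer and mapped to daily-scene classes by a dense layer. Dimensions and the
# concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=128, fused_dim=256, num_classes=10):
        super().__init__()
        # Fusion layer: project the concatenated embeddings into a joint space.
        self.fusion = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, fused_dim),
            nn.ReLU(),
        )
        # Dense layer mapping the combined embedding to target daily scenes.
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, audio_emb, visual_emb):
        combined = torch.cat([audio_emb, visual_emb], dim=-1)
        return self.classifier(self.fusion(combined))

# Usage with dummy embeddings standing in for Phase I outputs:
model = FusionClassifier()
audio_emb = torch.randn(4, 128)    # batch of audio embeddings
visual_emb = torch.randn(4, 128)   # batch of visual embeddings
logits = model(audio_emb, visual_emb)  # shape: (4, num_classes)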