ViLaS: Exploring the Effects of Vision and Language Context in Automatic Speech Recognition

Bibliographic Details
Title: ViLaS: Exploring the Effects of Vision and Language Context in Automatic Speech Recognition
Authors: Ni, Ziyi; Han, Minglun; Chen, Feilong; Meng, Linghui; Shi, Jing; Lv, Pin; Xu, Bo
Publication Year: 2023
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Audio and Speech Processing; Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Description: Enhancing automatic speech recognition (ASR) performance by leveraging additional multimodal information has shown promising results in previous studies. However, most of these works have focused primarily on visual cues derived from human lip motions. In fact, context-dependent visual and linguistic cues can also benefit speech recognition in many scenarios. In this paper, we first propose ViLaS (Vision and Language into Automatic Speech Recognition), a novel multimodal ASR model based on the continuous integrate-and-fire (CIF) mechanism, which can integrate visual and textual context simultaneously or separately to facilitate speech recognition. Next, we introduce an effective training strategy that improves performance in modal-incomplete test scenarios. Then, to explore the effects of integrating vision and language, we create VSDial, a multimodal ASR dataset with multimodal context cues in both Chinese and English versions. Finally, empirical results are reported on the public Flickr8K and self-constructed VSDial datasets. We explore various cross-modal fusion schemes, analyze fine-grained cross-modal alignment on VSDial, and provide insights into the effects of integrating multimodal information on speech recognition. (A minimal sketch of the CIF mechanism follows this record.)
Comment: Accepted to ICASSP 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2305.19972
Accession Number: edsarx.2305.19972
Database: arXiv
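
Note: The abstract above builds on the continuous integrate-and-fire (CIF) mechanism. Below is a minimal NumPy sketch of the standard CIF firing rule, assuming its common formulation (accumulate predicted per-frame weights, and fire an integrated embedding each time the accumulation crosses a threshold of 1.0). The function name, toy weights, and dimensions are illustrative assumptions, not code from the paper.

    import numpy as np

    def cif_fire(encoder_states, alphas, threshold=1.0):
        """Accumulate per-frame weights and fire one integrated embedding
        each time the running sum crosses the threshold (inference-time CIF)."""
        fired, accum = [], 0.0
        state = np.zeros(encoder_states.shape[1])
        for h, a in zip(encoder_states, alphas):
            if accum + a < threshold:
                accum += a                        # keep integrating this frame
                state += a * h
            else:
                used = threshold - accum          # weight needed to reach the boundary
                fired.append(state + used * h)    # fire the integrated embedding
                accum = a - used                  # leftover weight opens the next unit
                state = accum * h
        return np.stack(fired) if fired else np.empty((0, encoder_states.shape[1]))

    # Toy usage: 6 acoustic frames of dimension 4 with hand-set weights.
    states = np.random.randn(6, 4)
    weights = np.array([0.4, 0.7, 0.3, 0.8, 0.5, 0.6])
    print(cif_fire(states, weights).shape)        # (3, 4): three embeddings fired

In ViLaS, per the abstract, visual and textual context would presumably be fused around such label-level embeddings; that fusion is outside the scope of this sketch.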