E5-V: Universal Embeddings with Multimodal Large Language Models

Bibliographic Details
Title: E5-V: Universal Embeddings with Multimodal Large Language Models
Authors: Jiang, Ting; Song, Minghui; Zhang, Zihan; Huang, Haizhen; Deng, Weiwei; Sun, Feng; Zhang, Qi; Wang, Deqing; Zhuang, Fuzhen
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Information Retrieval
Description: Multimodal large language models (MLLMs) have shown promising advancements in general visual and language understanding. However, the representation of multimodal information using MLLMs remains largely unexplored. In this work, we introduce a new framework, E5-V, designed to adapt MLLMs for achieving universal multimodal embeddings. Our findings highlight the significant potential of MLLMs in representing multimodal inputs compared to previous approaches. By leveraging MLLMs with prompts, E5-V effectively bridges the modality gap between different types of inputs, demonstrating strong performance in multimodal embeddings even without fine-tuning. We propose a single modality training approach for E5-V, where the model is trained exclusively on text pairs. This method demonstrates significant improvements over traditional multimodal training on image-text pairs, while reducing training costs by approximately 95%. Additionally, this approach eliminates the need for costly multimodal training data collection. Extensive experiments across four types of tasks demonstrate the effectiveness of E5-V. As a universal multimodal model, E5-V not only achieves but often surpasses state-of-the-art performance in each task, despite being trained on a single modality.
Comment: Code and models are available at https://github.com/kongds/E5-V
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.12580
Accession Number: edsarx.2407.12580
Database: arXiv
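
A minimal sketch of the prompt-based unified embedding idea described in the abstract, assuming a LLaVA-NeXT-style checkpoint loaded through Hugging Face transformers. The checkpoint name, the exact prompt wording, and last-token pooling are illustrative assumptions, not the authors' verified recipe; see the linked repository (https://github.com/kongds/E5-V) for the released code and models.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaNextForConditionalGeneration

# Assumed checkpoint name for illustration; the released E5-V weights may differ.
MODEL_NAME = "royokong/e5-v"

processor = AutoProcessor.from_pretrained(MODEL_NAME)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

# Prompts that ask the MLLM to compress its input into a single word, so that
# text and images are projected into the same embedding space. The real E5-V
# prompts are wrapped in the model's chat template; this is a simplification.
TEXT_PROMPT = "{sent}\nSummary of the above sentence in one word:"
IMAGE_PROMPT = "<image>\nSummary of the above image in one word:"


def _last_token_embedding(out) -> torch.Tensor:
    # Last-layer hidden state of the final token, L2-normalized,
    # serves as the embedding (an assumed pooling choice).
    emb = out.hidden_states[-1][:, -1, :]
    return torch.nn.functional.normalize(emb, dim=-1)


@torch.no_grad()
def embed_text(sentence: str) -> torch.Tensor:
    inputs = processor(
        text=TEXT_PROMPT.format(sent=sentence), return_tensors="pt"
    ).to(model.device)
    out = model(**inputs, output_hidden_states=True, return_dict=True)
    return _last_token_embedding(out)


@torch.no_grad()
def embed_image(image: Image.Image) -> torch.Tensor:
    inputs = processor(
        text=IMAGE_PROMPT, images=image, return_tensors="pt"
    ).to(model.device)
    out = model(**inputs, output_hidden_states=True, return_dict=True)
    return _last_token_embedding(out)


# Usage: cosine similarity between a caption embedding and an image embedding.
# sim = (embed_text("a dog on the beach") @ embed_image(Image.open("dog.jpg")).T)
```

Because both modalities are mapped through the same "one word" prompt into the language model's hidden space, text-only contrastive training (the single modality training described above) can still improve image-text retrieval at inference time.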