Prismer: A Vision-Language Model with Multi-Task Experts

Bibliographic Details
Title: Prismer: A Vision-Language Model with Multi-Task Experts
Authors: Liu, Shikun; Fan, Linxi; Johns, Edward; Yu, Zhiding; Xiao, Chaowei; Anandkumar, Anima
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
Description: Recent vision-language models have shown impressive multi-modal generation capabilities. However, they typically require training huge models on massive datasets. As a more scalable alternative, we introduce Prismer, a data- and parameter-efficient vision-language model that leverages an ensemble of task-specific experts. Prismer only requires training of a small number of components, with the majority of network weights inherited from multiple readily available, pre-trained experts and kept frozen during training. By leveraging experts from a wide range of domains, we show that Prismer can efficiently pool this expert knowledge and adapt it to various vision-language reasoning tasks. In our experiments, we show that Prismer achieves fine-tuned and few-shot learning performance competitive with the current state of the art, whilst requiring up to two orders of magnitude less training data. Code is available at https://github.com/NVlabs/prismer.
Comment: Published at TMLR 2024. Project Page: https://shikun.io/projects/prismer Code: https://github.com/NVlabs/prismer
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2303.02506
Accession Number: edsarx.2303.02506
Database: arXiv