Parrot Captions Teach CLIP to Spot Text

Bibliographic Details
Title: Parrot Captions Teach CLIP to Spot Text
Authors: Lin, Yiqi; He, Conghui; Wang, Alex Jinpeng; Wang, Bin; Li, Weijia; Shou, Mike Zheng
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence
Description: Despite CLIP being the foundation model in numerous vision-language applications, CLIP suffers from a severe text spotting bias. This bias causes CLIP models to 'parrot' the visual text embedded within images while disregarding the authentic visual semantics. We uncover that in LAION-2B, the most popular image-text dataset, the captions also densely parrot (spell out) the text embedded in images. Our analysis shows that around 50% of the images contain embedded visual text, and around 30% of caption words appear in this embedded visual content. Based on this observation, we thoroughly inspect the different released versions of CLIP models and verify that visual text is the dominant factor in measuring LAION-style image-text similarity for these models. To examine whether these parrot captions shape the text spotting bias, we train a series of CLIP models on LAION subsets curated by different parrot-caption-oriented criteria. We show that training with parrot captions readily instills this bias but harms the expected vision-language representation learning in CLIP models. This suggests it is urgent to revisit either the design of CLIP-like models or the existing image-text dataset curation pipelines built on CLIP-score filtering (a minimal sketch of such a score appears after this record).
Comment: Project page: https://linyq17.github.io/CLIP-Parrot-Bias/. Adds more analysis and ablation studies; updates Figure 3 with a more precise metric.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2312.14232
Accession Number: edsarx.2312.14232
Database: arXiv
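
As a rough illustration of the CLIP score the description refers to (the image-text cosine similarity used for LAION-style filtering), the sketch below renders a caption's own words onto an otherwise blank image and compares its similarity to the caption against that of a fully blank image. A markedly higher score for the rendered-text image is the text spotting ('parroting') behavior the paper studies. This is a minimal probe under assumed defaults, not the authors' pipeline; the checkpoint openai/clip-vit-base-patch32 and the toy images are illustrative choices.

```python
# Minimal sketch: probe CLIP's text spotting bias by scoring a caption
# against (a) a blank image and (b) the same caption rendered as pixels.
# Assumes the Hugging Face `transformers`, `torch`, and `Pillow` packages.
import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

caption = "a photo of a golden retriever"

# Image (a): blank white canvas. Image (b): the caption drawn as visual text.
blank = Image.new("RGB", (224, 224), "white")
parrot = Image.new("RGB", (224, 224), "white")
ImageDraw.Draw(parrot).text((8, 104), caption, fill="black")

inputs = processor(text=[caption], images=[blank, parrot],
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Cosine similarity between the caption embedding and each image embedding,
# i.e., the LAION-style "CLIP score" used for dataset filtering.
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
scores = (img @ txt.T).squeeze(-1)
print(f"blank image:         {scores[0]:.3f}")
print(f"rendered-text image: {scores[1]:.3f}")  # typically much higher
```

On released CLIP checkpoints, a probe like this tends to favor the rendered-text image, which is the bias the paper's parrot-caption-oriented LAION subsets are designed to measure and ablate.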