CyCLIP: Cyclic Contrastive Language-Image Pretraining

Bibliographic Details
Title: CyCLIP: Cyclic Contrastive Language-Image Pretraining
Authors: Goel, Shashank; Bansal, Hritik; Bhatia, Sumit; Rossi, Ryan A.; Vinay, Vishwa; Grover, Aditya
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning
Description: Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness. Such models typically require joint reasoning in the image and text representation spaces for downstream inference tasks. Contrary to prior beliefs, we demonstrate that the image and text representations learned via a standard contrastive objective are not interchangeable and can lead to inconsistent downstream predictions. To mitigate this issue, we formalize consistency and propose CyCLIP, a framework for contrastive representation learning that explicitly optimizes for the learned representations to be geometrically consistent in the image and text space. In particular, we show that consistent representations can be learned by explicitly symmetrizing (a) the similarity between the two mismatched image-text pairs (cross-modal consistency); and (b) the similarity between the image-image pair and the text-text pair (in-modal consistency). Empirically, we show that the improved consistency in CyCLIP translates to significant gains over CLIP, with gains ranging from 10%-24% for zero-shot classification accuracy on standard benchmarks (CIFAR-10, CIFAR-100, ImageNet1K) and 10%-27% for robustness to various natural distribution shifts. The code is available at https://github.com/goel-shashank/CyCLIP.
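
The description specifies the two regularizers in enough detail to sketch them: cross-modal consistency asks that the similarity between a mismatched pair (image j, text k) equal that of (image k, text j), and in-modal consistency asks that image-image similarities match the corresponding text-text similarities. The following is a minimal PyTorch sketch written from that description alone; it is not taken from the linked repository, and the function name `cyclip_consistency_losses`, the squared-error reduction, and the loss weights below are illustrative assumptions.

```python
import torch

def cyclip_consistency_losses(image_emb: torch.Tensor, text_emb: torch.Tensor):
    """Sketch of CyCLIP's two consistency regularizers.

    image_emb, text_emb: (N, d) batches of L2-normalized embeddings
    for N matched image-text pairs.
    """
    sim_it = image_emb @ text_emb.t()   # sim(I_j, T_k), shape (N, N)
    sim_ii = image_emb @ image_emb.t()  # sim(I_j, I_k)
    sim_tt = text_emb @ text_emb.t()    # sim(T_j, T_k)

    # Cross-modal consistency: symmetrize the similarities of the two
    # mismatched pairs, i.e. sim(I_j, T_k) should equal sim(I_k, T_j).
    cross_modal = (sim_it - sim_it.t()).pow(2).mean()

    # In-modal consistency: the image-image similarity should match the
    # corresponding text-text similarity for every pair (j, k).
    in_modal = (sim_ii - sim_tt).pow(2).mean()

    return cross_modal, in_modal
```

In training these terms would be added to the standard CLIP contrastive loss with scalar weights, e.g. `total = clip_loss + lam_in * in_modal + lam_cross * cross_modal`; the weights `lam_in` and `lam_cross` are hyperparameters whose values are given in the paper, not assumed here.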
Comment: 19 pages, 13 tables, 6 figures, Oral at NeurIPS 2022
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2205.14459
Accession Number: edsarx.2205.14459
Database: arXiv