Revisiting Backdoor Attacks against Large Vision-Language Models

التفاصيل البيبلوغرافية
العنوان: Revisiting Backdoor Attacks against Large Vision-Language Models
المؤلفون: Liang, Siyuan, Liang, Jiawei, Pang, Tianyu, Du, Chao, Liu, Aishan, Chang, Ee-Chien, Cao, Xiaochun
سنة النشر: 2024
المجموعة: Computer Science
مصطلحات موضوعية: Computer Science - Computer Vision and Pattern Recognition
الوصف: Instruction tuning enhances large vision-language models (LVLMs) but raises security risks through potential backdoor attacks due to their openness. Previous backdoor studies focus on enclosed scenarios with consistent training and testing instructions, neglecting the practical domain gaps that could affect attack effectiveness. This paper empirically examines the generalizability of backdoor attacks during the instruction tuning of LVLMs for the first time, revealing certain limitations of most backdoor strategies in practical scenarios. We quantitatively evaluate the generalizability of six typical backdoor attacks on image caption benchmarks across multiple LVLMs, considering both visual and textual domain offsets. Our findings indicate that attack generalizability is positively correlated with the backdoor trigger's irrelevance to specific images/models and the preferential correlation of the trigger pattern. Additionally, we modify existing backdoor attacks based on the above key observations, demonstrating significant improvements in cross-domain scenario generalizability (+86% attack success rate). Notably, even without access to the instruction datasets, a multimodal instruction set can be successfully poisoned with a very low poisoning rate (0.2%), achieving an attack success rate of over 97%. This paper underscores that even simple traditional backdoor strategies pose a serious threat to LVLMs, necessitating more attention and in-depth research.
Comment: 24 pages, 8 figures
نوع الوثيقة: Working Paper
URL الوصول: http://arxiv.org/abs/2406.18844
رقم الأكسشن: edsarx.2406.18844
قاعدة البيانات: arXiv