Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models

التفاصيل البيبلوغرافية
العنوان: Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models
المؤلفون: He, Jiaming, Jiang, Wenbo, Hou, Guanyu, Fan, Wenshu, Zhang, Rui, Li, Hongwei
سنة النشر: 2024
المجموعة: Computer Science
مصطلحات موضوعية: Computer Science - Computation and Language, Computer Science - Cryptography and Security, Computer Science - Machine Learning
الوصف: Mainstream backdoor attacks on large language models (LLMs) typically set a fixed trigger in the input instance and specific responses for triggered queries. However, the fixed trigger setting (e.g., unusual words) may be easily detected by human detection, limiting the effectiveness and practicality in real-world scenarios. To enhance the stealthiness of backdoor activation, we present a new poisoning paradigm against LLMs triggered by specifying generation conditions, which are commonly adopted strategies by users during model inference. The poisoned model performs normally for output under normal/other generation conditions, while becomes harmful for output under target generation conditions. To achieve this objective, we introduce BrieFool, an efficient attack framework. It leverages the characteristics of generation conditions by efficient instruction sampling and poisoning data generation, thereby influencing the behavior of LLMs under target conditions. Our attack can be generally divided into two types with different targets: Safety unalignment attack and Ability degradation attack. Our extensive experiments demonstrate that BrieFool is effective across safety domains and ability domains, achieving higher success rates than baseline methods, with 94.3 % on GPT-3.5-turbo
نوع الوثيقة: Working Paper
URL الوصول: http://arxiv.org/abs/2404.14795
رقم الأكسشن: edsarx.2404.14795
قاعدة البيانات: arXiv