Advancing the Robustness of Large Language Models through Self-Denoised Smoothing

Bibliographic Details
Title: Advancing the Robustness of Large Language Models through Self-Denoised Smoothing
Authors: Ji, Jiabao, Hou, Bairu, Zhang, Zhen, Zhang, Guanhua, Fan, Wenqi, Li, Qing, Zhang, Yang, Liu, Gaowen, Liu, Sijia, Chang, Shiyu
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: Although large language models (LLMs) have achieved significant success, their vulnerability to adversarial perturbations, including recent jailbreak attacks, has raised considerable concern. However, the increasing size of these models and their limited access make improving their robustness a challenging task. Among various defense strategies, randomized smoothing has shown great potential for LLMs, as it requires neither full access to the model's parameters nor fine-tuning via adversarial training. However, randomized smoothing involves adding noise to the input before model prediction, so the smoothed model's robustness largely depends on the underlying model's performance on these noise-corrupted data; its effectiveness is therefore often limited by the model's sub-optimal performance on noisy inputs. To address this issue, we propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then make predictions based on these denoised versions. We call this procedure self-denoised smoothing. Unlike previous denoised-smoothing techniques in computer vision, which require training a separate denoiser model, our method offers significantly better efficiency and flexibility when enhancing the robustness of LLMs. Our experimental results indicate that our method surpasses existing methods in both empirical and certified robustness in defending against adversarial attacks on both downstream tasks and human alignment (i.e., jailbreak attacks). Our code is publicly available at https://github.com/UCSB-NLP-Chang/SelfDenoise
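The description outlines a three-step procedure: perturb the input with random noise, have the LLM itself denoise each perturbed copy, then predict on the denoised copies and aggregate by majority vote. A minimal sketch of that pipeline is below; the function names, the word-masking noise model, and the `denoise_fn`/`predict_fn` interfaces are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import random
from collections import Counter

def mask_words(text, rate, rng):
    """Noise model (assumed here): replace a fraction of words with [MASK]."""
    words = text.split()
    n_mask = max(1, int(len(words) * rate))
    for i in rng.sample(range(len(words)), n_mask):
        words[i] = "[MASK]"
    return " ".join(words)

def self_denoised_smoothing(text, denoise_fn, predict_fn,
                            n_samples=10, mask_rate=0.3, seed=0):
    """Smoothed prediction: perturb -> self-denoise -> predict -> majority vote.

    denoise_fn: callable that asks the LLM to fill in masked words (hypothetical).
    predict_fn: callable that maps a (denoised) text to a label (hypothetical).
    """
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = mask_words(text, mask_rate, rng)   # step 1: add input noise
        denoised = denoise_fn(noisy)               # step 2: LLM denoises its own input
        votes[predict_fn(denoised)] += 1           # step 3: predict on denoised copy
    label, _ = votes.most_common(1)[0]             # aggregate by majority vote
    return label
```

In practice `denoise_fn` and `predict_fn` would both be prompts to the same LLM, which is what makes the smoothing "self-denoised": no separate denoiser network is trained.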
Comment: Accepted by NAACL 2024. Jiabao, Bairu, Zhen, Guanhua contributed equally. This is an updated version of the paper: arXiv:2307.07171
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.12274
Accession Number: edsarx.2404.12274
Database: arXiv