Mitigating Social Biases in Language Models through Unlearning

Bibliographic Details
Title: Mitigating Social Biases in Language Models through Unlearning
Authors: Dige, Omkar, Singh, Diljot, Yau, Tsz Fung, Zhang, Qixuan, Bolandraftar, Borna, Zhu, Xiaodan, Khattak, Faiza Khan
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: Mitigating bias in language models (LMs) has become a critical problem due to the widespread deployment of LMs. Numerous approaches revolve around data pre-processing and fine-tuning of language models, tasks that can be both time-consuming and computationally demanding. Consequently, there is growing interest in machine unlearning techniques, given their capacity to induce the forgetting of undesired behaviors in existing pre-trained or fine-tuned models at lower computational cost. In this work, we explore two unlearning methods, (1) Partitioned Contrastive Gradient Unlearning (PCGU) applied to decoder models and (2) Negation via Task Vector, to reduce social biases in state-of-the-art open-source LMs such as LLaMA-2 and OPT. We also implement distributed PCGU for large models. It is shown empirically, through quantitative and qualitative analyses, that the negation via Task Vector method outperforms PCGU in debiasing, with minimal deterioration in model performance and perplexity. On LLaMA-2 7B, negation via Task Vector reduces the bias score by 11.8%.
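The negation-via-Task-Vector idea referenced in the abstract (following Ilharco et al.'s task arithmetic) amounts to subtracting the weight delta induced by fine-tuning on the undesired behavior from the base model's weights. A minimal sketch of that arithmetic, using plain NumPy arrays in place of real model parameters; the parameter name, shapes, values, and `scale` hyperparameter here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def build_task_vector(pretrained, finetuned):
    # Task vector = fine-tuned weights minus pretrained weights, per parameter.
    return {name: finetuned[name] - pretrained[name] for name in pretrained}

def negate_task_vector(pretrained, task_vector, scale=1.0):
    # Subtract the (scaled) task vector to "forget" the behavior it encodes.
    return {name: pretrained[name] - scale * task_vector[name] for name in pretrained}

# Toy example: a single hypothetical "layer" of weights.
rng = np.random.default_rng(0)
pretrained = {"layer.weight": rng.normal(size=(4, 4))}
# Pretend fine-tuning on text exhibiting the unwanted behavior shifted the weights.
finetuned = {"layer.weight": pretrained["layer.weight"] + 0.1}

task_vector = build_task_vector(pretrained, finetuned)
debiased = negate_task_vector(pretrained, task_vector, scale=1.0)
```

In practice the same element-wise subtraction is applied over a full model state dict, with `scale` tuned so that bias is reduced without degrading perplexity.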
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.13551
Accession Number: edsarx.2406.13551
Database: arXiv