Q-Adapter: Training Your LLM Adapter as a Residual Q-Function

Bibliographic Details
Title: Q-Adapter: Training Your LLM Adapter as a Residual Q-Function
Authors: Li, Yi-Chen; Zhang, Fuxiang; Qiu, Wenjie; Yuan, Lei; Jia, Chengxing; Zhang, Zongzhang; Yu, Yang
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: We consider the problem of adapting Large Language Models (LLMs) pre-trained with Reinforcement Learning from Human Feedback (RLHF) to downstream preference data. Naive approaches to this include supervised fine-tuning on preferred responses or reinforcement learning with a learned reward model. However, the LLM runs the risk of forgetting its initial knowledge as fine-tuning progresses. To customize the LLM while preserving its existing capabilities, this paper proposes a novel method named Q-Adapter. We start by formalizing LLM adaptation as the problem of maximizing a linear combination of two rewards, one corresponding to the reward optimized by the pre-trained LLM and the other to the downstream preference data. Although both rewards are unknown, we show that the problem can be solved by directly learning, from the preference data, a new module that approximates the residual Q-function. We regard this module as an adapter because, combined with the original pre-trained LLM, it forms the optimal customized LLM. Empirically, experiments on a range of domain-specific tasks and safety alignment tasks illustrate the superiority of Q-Adapter in both anti-forgetting and learning from new preferences. (An illustrative sketch of the adapter-plus-base-model combination follows this record.)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.03856
Accession Number: edsarx.2407.03856
Database: arXiv
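
The abstract describes combining a frozen, RLHF-pre-trained LLM with a learned adapter that approximates a residual Q-function. The sketch below is a minimal, hypothetical illustration of one plausible decoding-time combination, not the authors' implementation: it assumes a soft-Q-style rule in which the customized next-token distribution is a softmax over the base model's log-probabilities plus scaled residual Q-values. All identifiers (ResidualQAdapter, customized_next_token_dist, beta) and the specific combination rule are assumptions made for illustration.

# Hypothetical sketch of the decoding-time idea in the abstract: the frozen
# base LLM supplies next-token log-probabilities, a small adapter head
# supplies residual Q-values, and their scaled sum defines the customized
# policy. Names and the combination rule are illustrative assumptions,
# not identifiers from the paper or its code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualQAdapter(nn.Module):
    """Small head mapping the LLM's hidden state to per-token residual Q-values."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.q_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        # hidden_state: (batch, hidden_dim) -> residual Q-values: (batch, vocab_size)
        return self.q_head(hidden_state)


def customized_next_token_dist(
    base_log_probs: torch.Tensor,  # (batch, vocab): log pi_0(a|s) from the frozen LLM
    residual_q: torch.Tensor,      # (batch, vocab): Q_res(s, a) from the adapter
    beta: float = 1.0,             # temperature trading off old behavior vs. new preferences
) -> torch.Tensor:
    """Assumed soft-Q-style rule: pi(a|s) proportional to pi_0(a|s) * exp(Q_res(s, a) / beta),
    i.e. a softmax over (log pi_0 + Q_res / beta)."""
    return F.softmax(base_log_probs + residual_q / beta, dim=-1)


if __name__ == "__main__":
    batch, hidden_dim, vocab_size = 2, 16, 32
    adapter = ResidualQAdapter(hidden_dim, vocab_size)

    # Stand-ins for the frozen LLM's last hidden state and next-token log-probs.
    hidden_state = torch.randn(batch, hidden_dim)
    base_log_probs = F.log_softmax(torch.randn(batch, vocab_size), dim=-1)

    q_res = adapter(hidden_state)
    dist = customized_next_token_dist(base_log_probs, q_res, beta=0.5)
    print(dist.shape, dist.sum(dim=-1))  # (2, 32); each row sums to 1

Under this assumed rule, only the adapter head would be trained on the downstream preference data while the base LLM stays frozen, which matches the anti-forgetting motivation stated in the abstract; the exact training objective for the residual Q-function is given in the paper itself.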