Low-Redundant Optimization for Large Language Model Alignment

Bibliographic Details
Title: Low-Redundant Optimization for Large Language Model Alignment
Authors: Chen, Zhipeng; Zhou, Kun; Zhao, Wayne Xin; Wang, Jingyuan; Wen, Ji-Rong
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Large language models (LLMs) still struggle to align with human preferences in complex tasks and scenarios, and are prone to overfitting to unexpected patterns or superficial styles in the training data. We conduct an empirical study that selects only the top 10% most-updated parameters in LLMs for alignment training, and observe improvements in both the convergence process and the final performance, indicating the existence of neurons in LLMs that are redundant for alignment training. To reduce their influence, we propose a low-redundant alignment method named ALLO, which focuses on optimizing the most relevant neurons with the most useful supervision signals. Concretely, we first identify the neurons related to the human preference data with a gradient-based strategy, then identify alignment-related key tokens with reward models for computing the loss. Besides, we decompose the alignment process into a forgetting stage and a learning stage, where we first forget tokens carrying unaligned knowledge and then learn aligned knowledge, updating a different ratio of neurons in each stage. Experimental results on 10 datasets show the effectiveness of ALLO. Our code and data are available at https://github.com/RUCAIBox/ALLO. (Hedged code sketches of the neuron-selection and key-token ideas follow this record.)
Comment: 14 pages, work in progress
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.12606
Accession Number: edsarx.2406.12606
Database: arXiv
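
The gradient-based neuron selection described in the abstract can be pictured as masking parameter updates to the top fraction of entries ranked by gradient magnitude. Below is a minimal PyTorch sketch under that assumption; `build_update_masks`, `masked_step`, the per-tensor ranking, and the 10% ratio are illustrative choices, not the authors' released implementation (see the repository above for that).

```python
import torch

def build_update_masks(model, loss, top_ratio=0.10):
    # Backpropagate once on preference data to obtain per-parameter gradients.
    loss.backward()
    masks = {}
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        g = param.grad.detach().abs()
        k = max(1, int(top_ratio * g.numel()))
        # Threshold at the k-th largest gradient magnitude within this tensor.
        threshold = torch.topk(g.flatten(), k).values.min()
        masks[name] = (g >= threshold).to(param.dtype)
    return masks

def masked_step(model, masks, lr=1e-5):
    # Update only the selected entries; everything else stays frozen.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks and param.grad is not None:
                param -= lr * masks[name] * param.grad
```

Ranking per tensor (rather than globally across the model) is a simplification that avoids materializing all gradients at once; a global threshold is an equally plausible reading of "top 10% most-updated parameters".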
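Similarly, the key-token idea amounts to computing the training loss only over tokens that a reward model scores as alignment-relevant. A minimal sketch, assuming per-token scores `token_scores` are already available and using a hypothetical `keep_ratio` cutoff:

```python
import torch
import torch.nn.functional as F

def key_token_loss(logits, labels, token_scores, keep_ratio=0.5):
    # Per-token cross-entropy over a sequence of length T.
    # logits: (T, vocab), labels: (T,), token_scores: (T,).
    per_token = F.cross_entropy(logits, labels, reduction="none")
    # Keep only the tokens the reward model flags as most alignment-relevant.
    k = max(1, int(keep_ratio * token_scores.numel()))
    threshold = torch.topk(token_scores, k).values.min()
    mask = (token_scores >= threshold).to(per_token.dtype)
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)
```

The same masked loss could be run with a negated objective on unaligned tokens for the forgetting stage and a standard objective on aligned tokens for the learning stage, with `build_update_masks` called at a different `top_ratio` in each stage; this two-stage wiring is an assumption based on the abstract, not a confirmed detail of ALLO.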