Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models

Bibliographic Details
Title: Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models
Authors: Agarwal, Aradhye; Ramesh, Suhas K.; Sengupta, Ayan; Chakraborty, Tanmoy
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Fine-tuning large language models (LLMs) on downstream tasks requires substantial computational resources. A class of parameter-efficient fine-tuning (PEFT) techniques aims to mitigate these computational challenges by selectively fine-tuning only a small fraction of the model parameters. Although computationally efficient, these techniques often fail to match the performance of fully fine-tuned models, primarily due to inherent biases introduced during parameter selection. Traditional selective PEFT techniques use a fixed set of parameters based on a predefined budget (a process also known as unmasking), failing to capture parameter importance dynamically and often ending up exceeding the budget. We introduce $\text{ID}^3$, a novel selective PEFT method that calculates parameter importance continually and dynamically unmasks parameters by balancing exploration and exploitation in parameter selection. Our empirical study on 15 tasks spanning natural language understanding and generative tasks demonstrates the effectiveness of our method compared to fixed-masking-based PEFT techniques. We analytically show that $\text{ID}^3$ reduces the number of gradient updates by a factor of two, enhancing computational efficiency. $\text{ID}^3$ is robust to random initialization of neurons and, therefore, can be seamlessly integrated into existing additive and reparametrization-based PEFT modules such as adapters and LoRA for dynamic sparsification. (An illustrative code sketch of the dynamic-unmasking idea follows this record.)
Comment: 15 pages, 7 tables, 9 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.14470
Accession Number: edsarx.2408.14470
Database: arXiv
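
Illustrative sketch (not part of the original record): the description above characterizes $\text{ID}^3$ as continually scoring parameter importance and dynamically unmasking parameters while balancing exploration and exploitation. The following is a minimal PyTorch sketch of that general idea only; the importance score (gradient magnitude relative to weight magnitude), the per-call unmasking quota k, and the helper names unmask_topk and apply_masks are illustrative assumptions, not the paper's exact $\text{ID}^3$ rule.

    # Hypothetical sketch of incremental ("step-by-step") unmasking for selective PEFT.
    # masks[name] is a bool tensor; True = parameter entry is unmasked (trainable).
    import torch

    @torch.no_grad()
    def unmask_topk(model, masks, k=100, eps=1e-8):
        """Flip the k highest-scoring still-masked parameter entries to trainable."""
        flat_scores, order = [], []
        for name, p in model.named_parameters():
            if name not in masks or p.grad is None:
                continue
            # Hypothetical importance: gradient magnitude relative to weight magnitude,
            # loosely trading off "exploitation" (large gradients) and "exploration" (small weights).
            score = p.grad.abs() / (p.abs() + eps)
            # Exclude already-unmasked entries from the ranking.
            score = score.masked_fill(masks[name], float("-inf"))
            flat_scores.append(score.flatten())
            order.append(name)
        if not flat_scores:
            return
        scores = torch.cat(flat_scores)
        k = min(k, int(torch.isfinite(scores).sum()))
        if k == 0:
            return
        top = torch.topk(scores, k).indices
        # Map flat indices back to each parameter tensor and flip its mask bits.
        offset = 0
        for name in order:
            n = masks[name].numel()
            local = top[(top >= offset) & (top < offset + n)] - offset
            masks[name].view(-1)[local] = True
            offset += n

    @torch.no_grad()
    def apply_masks(model, masks):
        """Zero gradients of still-masked entries so the optimizer updates only unmasked ones."""
        for name, p in model.named_parameters():
            if name in masks and p.grad is not None:
                p.grad.mul_(masks[name].to(p.grad.dtype))

Under these assumptions, a training loop would start with all-False masks (every selected tensor frozen), compute gradients on the full model, call unmask_topk periodically until the parameter budget is reached, and call apply_masks immediately before optimizer.step() so that only the unmasked weights receive updates.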