Scalable Language Model with Generalized Continual Learning

Bibliographic Details
Title: Scalable Language Model with Generalized Continual Learning
Authors: Peng, Bohao; Tian, Zhuotao; Liu, Shu; Yang, Mingchang; Jia, Jiaya
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Continual learning has gained increasing importance as it facilitates the acquisition and refinement of scalable knowledge and skills in language models. However, existing methods typically encounter strict limitations and challenges in real-world scenarios, such as reliance on experience replay, optimization constraints, and the availability of task IDs at inference. In this study, we introduce the Scalable Language Model (SLM) to overcome these limitations within a more challenging and generalized setting, representing a significant advancement toward practical applications of continual learning. Specifically, we propose Joint Adaptive Re-Parameterization (JARe), integrated with Dynamic Task-related Knowledge Retrieval (DTKR), to enable adaptive adjustment of language models based on specific downstream tasks. This approach leverages the task distribution within the vector space, aiming to achieve a smooth and effortless continual learning process. Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting. Moreover, while prior research has primarily focused on a single task type such as classification, our study goes further, using the large language model LLaMA-2 to explore the effects across diverse domains and task types, so that a single language model can scale gracefully to broader applications.
Comment: The Twelfth International Conference on Learning Representations
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.07470
Accession Number: edsarx.2404.07470
Database: arXiv
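
The abstract describes JARe and DTKR only at a high level: retrieve task-related knowledge by locating the input within the task distribution in vector space, then adaptively re-parameterize the model for that task. The Python sketch below illustrates one plausible reading of this retrieve-then-reparameterize pattern, not the authors' actual implementation: it finds the task prototypes nearest to an input embedding and folds a softmax-weighted sum of hypothetical task-specific low-rank updates into a base weight matrix. All names here (retrieve_task_weights, reparameterize, the prototype store, the rank-2 adapters) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def retrieve_task_weights(query, prototypes, top_k=2):
    # Cosine similarity between the query embedding and each stored task prototype.
    sims = prototypes @ query / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(query) + 1e-8
    )
    idx = np.argsort(sims)[-top_k:]          # indices of the top-k closest tasks
    w = np.exp(sims[idx] - sims[idx].max())  # numerically stable softmax
    return idx, w / w.sum()

def reparameterize(base_W, adapters, idx, weights):
    # Fold a weighted sum of low-rank task updates (A @ B) into the base weights.
    W = base_W.copy()
    for i, w in zip(idx, weights):
        A, B = adapters[i]
        W = W + w * (A @ B)
    return W

# Toy setup: d_model = 8, adapter rank = 2, three previously learned tasks.
rng = np.random.default_rng(0)
d, r, n_tasks = 8, 2, 3
base_W = rng.normal(size=(d, d))
prototypes = rng.normal(size=(n_tasks, d))   # one mean embedding per seen task
adapters = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
            for _ in range(n_tasks)]

# An input whose embedding resembles task 1 should mostly retrieve task 1.
query = prototypes[1] + 0.05 * rng.normal(size=d)
idx, weights = retrieve_task_weights(query, prototypes)
W_adapted = reparameterize(base_W, adapters, idx, weights)
print("retrieved tasks:", idx, "weights:", np.round(weights, 3))
```

Folding the update directly into the weights, rather than attaching adapter layers at inference time, is one way to read "re-parameterization"; the paper's actual formulation may differ.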