Knowledge Fusion of Large Language Models

Bibliographic Details
Title: Knowledge Fusion of Large Language Models
Authors: Wan, Fanqi; Huang, Xinting; Cai, Deng; Quan, Xiaojun; Bi, Wei; Shi, Shuming
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: While training large language models (LLMs) from scratch can produce models with distinct functionalities and strengths, it comes at significant cost and may result in redundant capabilities. Alternatively, a cost-effective and compelling approach is to merge existing pre-trained LLMs into a more potent model. However, due to the varying architectures of these LLMs, directly blending their weights is impractical. In this paper, we introduce the notion of knowledge fusion for LLMs, aimed at combining the capabilities of existing LLMs and transferring them into a single LLM. By leveraging the generative distributions of source LLMs, we externalize their collective knowledge and unique strengths, thereby potentially elevating the capabilities of the target model beyond those of any individual source LLM. We validate our approach using three popular LLMs with different architectures (Llama-2, MPT, and OpenLLaMA) across various benchmarks and tasks. Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation. Our code, model weights, and data are publicly available at https://github.com/fanqiwan/FuseLLM. (A brief illustrative sketch of the fusion objective follows this record.)
Comment: Accepted to ICLR 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2401.10491
Accession Number: edsarx.2401.10491
Database: arXiv
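
Illustrative sketch. The description above says the method distills the generative distributions of several source LLMs into one target model. The PyTorch snippet below is a minimal sketch of that idea under simplifying assumptions, not the paper's implementation: it assumes all models share one tokenizer and vocabulary (the paper itself must align differing vocabularies), and the names fuse_distributions, fusion_loss, and the weight lam are hypothetical. The fusion rule shown keeps, per sequence, the source distribution with the lowest cross-entropy against the gold tokens (a MinCE-style choice), then trains the target with a weighted sum of the standard causal-LM loss and a KL term toward the fused distribution.

    import torch
    import torch.nn.functional as F

    def fuse_distributions(source_logits, labels):
        # source_logits: list of [batch, seq, vocab] tensors, one per source LLM
        # labels: [batch, seq] gold token ids
        # Per sequence, keep the source distribution whose cross-entropy
        # against the gold tokens is lowest (MinCE-style fusion).
        ces = torch.stack([
            F.cross_entropy(lg.transpose(1, 2), labels, reduction="none").mean(-1)
            for lg in source_logits
        ])                                          # [n_src, batch]
        best = ces.argmin(dim=0)                    # [batch] best source per sequence
        stacked = torch.stack(source_logits)        # [n_src, batch, seq, vocab]
        idx = best.view(1, -1, 1, 1).expand(1, *stacked.shape[1:])
        return stacked.gather(0, idx).squeeze(0)    # [batch, seq, vocab]

    def fusion_loss(target_logits, fused_logits, labels, lam=0.9):
        # Weighted sum of the usual causal-LM loss and a KL term pulling the
        # target's next-token distribution toward the fused source distribution.
        # lam is a hypothetical mixing weight, not a value from the paper.
        ce = F.cross_entropy(target_logits.transpose(1, 2), labels)
        kl = F.kl_div(F.log_softmax(target_logits, dim=-1),
                      F.softmax(fused_logits, dim=-1),
                      reduction="batchmean")
        return lam * ce + (1.0 - lam) * kl

    # Toy usage: two "sources", vocab of 100, batch of 2, sequence length 8.
    labels = torch.randint(0, 100, (2, 8))
    src = [torch.randn(2, 8, 100) for _ in range(2)]
    tgt = torch.randn(2, 8, 100, requires_grad=True)
    loss = fusion_loss(tgt, fuse_distributions(src, labels), labels)
    loss.backward()

Sequence-level MinCE is only one plausible fusion rule; a token-level selection or a weighted average of source distributions would slot into fuse_distributions the same way.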