Purifying Large Language Models by Ensembling a Small Language Model

Bibliographic details
Title: Purifying Large Language Models by Ensembling a Small Language Model
Authors: Li, Tianlin, Liu, Qian, Pang, Tianyu, Du, Chao, Guo, Qing, Liu, Yang, Lin, Min
Publication year: 2024
Collection: Computer Science
Subject terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, I.2
Description: The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources. Despite substantial efforts devoted to data cleaning and curation, well-constructed LLMs have been reported to suffer from copyright infringement, data poisoning, and/or privacy violations, which would impede practical deployment of LLMs. In this study, we propose a simple and easily implementable method for purifying LLMs from the negative effects caused by uncurated data, namely, through ensembling LLMs with benign and small language models (SLMs). Aside from theoretical guarantees, we perform comprehensive experiments to empirically confirm the efficacy of ensembling LLMs with SLMs, which can effectively preserve the performance of LLMs while mitigating issues such as copyright infringement, data poisoning, and privacy violations.
Document type: Working Paper
Access URL: http://arxiv.org/abs/2402.14845
Accession number: edsarx.2402.14845
Database: arXiv
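The abstract describes purifying an LLM by ensembling it with a benign SLM. A minimal sketch of one plausible reading of that idea, mixing the two models' next-token probability distributions, is shown below; the function name, the mixing weight `alpha`, and the toy logits are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def ensemble_next_token_probs(llm_logits, slm_logits, alpha=0.5):
    """Mix the next-token distributions of an LLM and a benign SLM.

    Both models are assumed to share one vocabulary; alpha is the
    weight given to the SLM (a hypothetical choice for illustration).
    """
    p_llm = softmax(llm_logits)
    p_slm = softmax(slm_logits)
    return (1.0 - alpha) * p_llm + alpha * p_slm

# Toy example over a 4-token vocabulary (made-up logits).
llm = np.array([2.0, 0.5, -1.0, 0.1])
slm = np.array([0.2, 1.5, 0.0, -0.5])
mixed = ensemble_next_token_probs(llm, slm, alpha=0.5)
```

Because the mixture is a convex combination of two valid distributions, `mixed` is itself a valid probability distribution, so standard decoding (greedy, sampling) applies unchanged.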