How Much are Large Language Models Contaminated? A Comprehensive Survey and the LLMSanitize Library

Bibliographic Details
Title: How Much are Large Language Models Contaminated? A Comprehensive Survey and the LLMSanitize Library
Authors: Ravaut, Mathieu, Ding, Bosheng, Jiao, Fangkai, Chen, Hailin, Li, Xingxuan, Zhao, Ruochen, Qin, Chengwei, Xiong, Caiming, Joty, Shafiq
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: With the rise of Large Language Models (LLMs) in recent years, abundant new opportunities are emerging, but also new challenges, among which contamination is quickly becoming critical. Business applications and fundraising in AI have reached a scale at which a few percentage points gained on popular question-answering benchmarks can translate into tens of millions of dollars, placing high pressure on model integrity. At the same time, it is becoming harder and harder to keep track of the data that LLMs have seen, if not impossible with closed-source models such as GPT-4 and Claude-3, which do not divulge any information about their training sets. As a result, contamination becomes a major issue: LLMs' performance may no longer be reliable, as their high scores may be at least partly due to previous exposure to the evaluation data. This limitation jeopardizes progress in the field of NLP; yet, methods to efficiently detect contamination remain scarce. In this paper, we survey all recent work on contamination detection for LLMs, and help the community track contamination levels of LLMs by releasing an open-source Python library named LLMSanitize that implements the major contamination detection algorithms.
Comment: 8 pages, 1 figure, 1 table
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.00699
Accession Number: edsarx.2404.00699
Database: arXiv
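
To make "contamination detection" concrete, the sketch below shows one of the simplest families of checks: flagging an evaluation sample whose word n-grams overlap heavily with a training document. This is a minimal illustrative example only, not the LLMSanitize API; the function names, n-gram size, and overlap threshold are hypothetical choices made for this sketch.

# Illustrative n-gram overlap contamination check (hypothetical, not the LLMSanitize API).
from typing import List, Set


def word_ngrams(text: str, n: int = 8) -> Set[str]:
    # Lowercase, split on whitespace, and collect all word-level n-grams.
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def overlap_ratio(eval_sample: str, training_doc: str, n: int = 8) -> float:
    # Fraction of the evaluation sample's n-grams that also occur in the training document.
    sample_grams = word_ngrams(eval_sample, n)
    if not sample_grams:
        return 0.0
    return len(sample_grams & word_ngrams(training_doc, n)) / len(sample_grams)


def is_contaminated(eval_sample: str, corpus: List[str], n: int = 8, threshold: float = 0.1) -> bool:
    # Flag the sample if any training document shares at least `threshold` of its n-grams.
    return any(overlap_ratio(eval_sample, doc, n) >= threshold for doc in corpus)


if __name__ == "__main__":
    question = "What is the capital of France? The capital of France is Paris."
    corpus = ["... the capital of france is paris, as every crawled web page repeats ..."]
    print(is_contaminated(question, corpus, n=5, threshold=0.1))  # True: one 5-gram recurs verbatim

The methods surveyed in the paper range from string-matching signals of this kind to model-based checks that do not require access to the training data; the sketch only conveys the simplest case of what a "contamination level" measures.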