Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models

Bibliographic Details
Title: Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models
Authors: Wang, Wentian; Kantor, Paul; Feldman, Jacob; Gallos, Lazaros; Wang, Hao
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
Description: We propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance in question-answering tasks with modified terms. We reasoned that an agent that "truly" understands a concept can still evaluate it when key terms are replaced by suitably defined alternate terms, and sought to differentiate such comprehension from mere text replacement. In our study, we modified standardized test questions by replacing a key term with a dummy word along with its definition. The key term could appear in the question, in the answers, or in both. Notwithstanding the high scores achieved by recent popular LLMs on the MMLU leaderboard, we found a substantial reduction in model performance after such replacement, suggesting poor comprehension. This dataset provides a rigorous benchmark for testing true model comprehension and poses a challenge to the broader scientific community.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.15468
Accession Number: edsarx.2406.15468
Database: arXiv
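
The term-substitution procedure described in the abstract could be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name, the dummy word "flumble", and the exact preamble wording are assumptions made for the example.

```python
import re

def substitute_term(text, key_term, dummy, definition):
    """Replace a key term with a dummy word and prepend the dummy's
    definition, so that an agent that truly understands the concept
    can still answer. Hypothetical helper, not the authors' code."""
    # Case-insensitive, whole-word replacement of the key term.
    pattern = re.compile(r"\b" + re.escape(key_term) + r"\b", re.IGNORECASE)
    replaced = pattern.sub(dummy, text)
    # Supply the definition of the dummy term alongside the question.
    preamble = f'Suppose "{dummy}" means {definition} '
    return preamble + replaced

question = "What is the derivative of x squared?"
print(substitute_term(question, "derivative", "flumble",
                      "the rate of change of a function."))
# → Suppose "flumble" means the rate of change of a function. What is the flumble of x squared?
```

A comprehending model should answer the modified question as easily as the original, while a model relying on surface token statistics may fail once the familiar term is gone.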