A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners

Bibliographic Details
Title: A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
Authors: Bowen Jiang, Yangxinyu Xie, Zhuoqun Hao, Xiaomeng Wang, Tanwi Mallick, Weijie J. Su, Camillo J. Taylor, Dan Roth
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. Rather than evaluating LLMs on accuracy alone, we investigate their token bias in solving logical reasoning tasks. Specifically, we develop carefully controlled synthetic datasets featuring conjunction fallacy and syllogistic problems. Our framework outlines a list of hypotheses in which token biases are readily identifiable, with all null hypotheses assuming genuine reasoning capabilities of LLMs. The findings in this study suggest, with statistical guarantees, that most LLMs still struggle with logical reasoning. While they may perform well on classic problems, their success largely depends on recognizing superficial patterns with strong token bias, raising concerns about their actual reasoning and generalization abilities.
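The abstract describes a hypothesis-testing setup in which the null hypothesis assumes genuine reasoning: if a model truly reasons, perturbing superficial tokens (e.g., swapping names in a conjunction-fallacy problem) should not systematically flip correct answers to incorrect ones. A minimal sketch of such a test, using an exact McNemar-style comparison of paired answers before and after perturbation (the function name, counts, and test choice are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch, not the paper's code: an exact McNemar-style test for
# token bias on paired (original, token-perturbed) problems. Under the null
# hypothesis of genuine reasoning, correct->wrong flips and wrong->correct
# flips after a superficial token swap should be equally likely (p = 0.5).
from math import comb

def mcnemar_exact(flips_to_wrong: int, flips_to_right: int) -> float:
    """Two-sided exact McNemar p-value over discordant answer pairs."""
    n = flips_to_wrong + flips_to_right
    k = max(flips_to_wrong, flips_to_right)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for two-sidedness, capped at 1.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts: 40 problems answered correctly before perturbation but
# wrongly after, versus 5 flips in the opposite direction.
p = mcnemar_exact(40, 5)
print(f"p-value = {p:.2e}")  # a small p-value rejects the genuine-reasoning null
```

A strongly asymmetric flip pattern yields a tiny p-value and rejects the null, which mirrors the paper's qualitative conclusion that success on classic phrasings often does not survive superficial token changes.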
Comment: Code is open-sourced at https://github.com/bowen-upenn/llm_token_bias
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.11050
Accession Number: edsarx.2406.11050
Database: arXiv