Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell

Bibliographic Details
Title: Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell
Authors: Lu, Taiming; Gao, Muhan; Yu, Kuai; Byerly, Adam; Khashabi, Daniel
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Large Language Models (LLMs) exhibit positional bias, struggling to utilize information from the middle or end of long contexts. Our study explores LLMs' long-context reasoning by probing their hidden representations. We find that while LLMs encode the position of target information, they often fail to leverage this in generating accurate responses. This reveals a disconnect between information retrieval and utilization, a "know but don't tell" phenomenon. We further analyze the relationship between extraction time and final accuracy, offering insights into the underlying mechanics of transformer models.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.14673
Accession Number: edsarx.2406.14673
Database: arXiv
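The probing methodology mentioned in the abstract can be illustrated with a minimal synthetic sketch (this is not the authors' code; the data, dimensions, and probe choice are illustrative assumptions): a linear probe is trained on simulated hidden states to decode the position of the target information, showing how one can test whether position is linearly encoded.

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions = 10   # candidate positions of the target in the long context (assumed)
hidden_dim = 64    # hidden-state dimensionality (illustrative, not the paper's)
n_samples = 500

# Synthetic "hidden states": each target position leaves a distinct linear
# signature plus noise, mimicking position information being encoded.
signatures = rng.normal(size=(n_positions, hidden_dim))
labels = rng.integers(0, n_positions, size=n_samples)
states = signatures[labels] + 0.5 * rng.normal(size=(n_samples, hidden_dim))

# Linear probe: one-vs-all least squares, a simple stand-in for the
# logistic-regression probes commonly used in interpretability work.
targets = np.eye(n_positions)[labels]
W, *_ = np.linalg.lstsq(states, targets, rcond=None)
preds = (states @ W).argmax(axis=1)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe recovers the position well above the 1/10 chance level, position information is present in the representations; the paper's point is that such information can be decodable even when the model's generated answer fails to use it.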