KoLA: Carefully Benchmarking World Knowledge of Large Language Models

Bibliographic Details
Title: KoLA: Carefully Benchmarking World Knowledge of Large Language Models
Authors: Yu, Jifan; Wang, Xiaozhi; Tu, Shangqing; Cao, Shulin; Zhang-Li, Daniel; Lv, Xin; Peng, Hao; Yao, Zijun; Zhang, Xiaohan; Li, Hanming; Li, Chunyang; Zhang, Zheyuan; Bai, Yushi; Liu, Yantao; Xin, Amy; Lin, Nianyi; Yun, Kaifeng; Gong, Linlu; Chen, Jianhui; Wu, Zhili; Qi, Yunjia; Li, Weikai; Guan, Yong; Zeng, Kaisheng; Qi, Ji; Jin, Hailong; Liu, Jinxin; Gu, Yu; Yao, Yuan; Ding, Ning; Hou, Lei; Liu, Zhiyuan; Xu, Bin; Tang, Jie; Li, Juanzi
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: The unprecedented performance of large language models (LLMs) necessitates improvements in evaluations. Rather than merely exploring the breadth of LLM abilities, we believe meticulous and thoughtful designs are essential to thorough, unbiased, and applicable evaluations. Given the importance of world knowledge to LLMs, we construct a Knowledge-oriented LLM Assessment benchmark (KoLA), in which we carefully design three crucial factors: (1) For ability modeling, we mimic human cognition to form a four-level taxonomy of knowledge-related abilities, covering 19 tasks. (2) For data, to ensure fair comparisons, we use both Wikipedia, a corpus on which LLMs are prevalently pre-trained, and continuously collected emerging corpora, aiming to evaluate the capacity to handle unseen data and evolving knowledge. (3) For evaluation criteria, we adopt a contrastive system, including overall standard scores for better numerical comparability across tasks and models, and a unique self-contrast metric for automatically evaluating knowledge-creating ability. We evaluate 28 open-source and commercial LLMs and obtain some intriguing findings. The KoLA dataset and open-participation leaderboard are publicly released at https://kola.xlore.cn and will be continuously updated to provide references for developing LLMs and knowledge-related systems.
Comment: Accepted by ICLR 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2306.09296
Accession Number: edsarx.2306.09296
Database: arXiv