WTU-EVAL: A Whether-or-Not Tool Usage Evaluation Benchmark for Large Language Models

Bibliographic Details
Title: WTU-EVAL: A Whether-or-Not Tool Usage Evaluation Benchmark for Large Language Models
Authors: Ning, Kangyun, Su, Yisong, Lv, Xueqiang, Zhang, Yuanzhe, Liu, Jian, Liu, Kang, Xu, Jinan
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: Although Large Language Models (LLMs) excel in NLP tasks, they still need external tools to extend their abilities. Current research on tool learning with LLMs often assumes mandatory tool use, which does not always align with real-world situations, where the necessity for tools is uncertain and incorrect or unnecessary tool use can damage the general abilities of LLMs. We therefore propose to explore whether LLMs can discern their ability boundaries and use tools flexibly. We introduce the Whether-or-not tool usage Evaluation benchmark (WTU-Eval) to assess LLMs with eleven datasets, six of which are tool-usage datasets and five of which are general datasets. LLMs are prompted to use tools according to their needs. The results of eight LLMs on WTU-Eval reveal that LLMs frequently struggle to determine tool use in general datasets, and that LLMs' performance in tool-usage datasets improves when their ability is similar to ChatGPT's. In both dataset types, incorrect tool usage significantly impairs LLMs' performance. To mitigate this, we also develop a finetuning dataset to enhance tool decision-making. Fine-tuning Llama2-7B yields a 14% average performance improvement and a 16.8% decrease in incorrect tool usage. We will release the WTU-Eval benchmark.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.12823
Accession Number: edsarx.2407.12823
Database: arXiv