McEval: Massively Multilingual Code Evaluation

Bibliographic Details
Title: McEval: Massively Multilingual Code Evaluation
Authors: Linzheng Chai; Shukai Liu; Jian Yang; Yuwei Yin; Ke Jin; Jiaheng Liu; Tao Sun; Ge Zhang; Changyu Ren; Hongcheng Guo; Zekun Wang; Boyang Wang; Xianjie Wu; Bing Wang; Tongliang Li; Liqun Yang; Sufeng Duan; Zhoujun Li
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Programming Languages
Description: Code large language models (LLMs) have shown remarkable advances in code understanding, completion, and generation tasks. Programming benchmarks, composed of a selection of code challenges and corresponding test cases, serve as a standard to evaluate the capability of different LLMs on such tasks. However, most existing benchmarks primarily focus on Python and remain restricted to a limited number of languages, where samples for other languages are translated from the Python originals (e.g., MultiPL-E), degrading data diversity. To further facilitate research on code LLMs, we propose McEval, a massively multilingual code benchmark covering 40 programming languages with 16K test samples, which substantially pushes the limits of code LLMs in multilingual scenarios. The benchmark contains challenging code completion, understanding, and generation evaluation tasks, along with a finely curated, massively multilingual instruction corpus, McEval-Instruct. In addition, we introduce mCoder, an effective multilingual coder trained on McEval-Instruct to support multilingual programming language generation. Extensive experimental results on McEval show that there is still a considerable gap between open-source models and closed-source LLMs (e.g., GPT-series models) across numerous languages. The instruction corpora, evaluation benchmark, and leaderboard are available at https://mceval.github.io/.
Comment: 22 pages
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.07436
Accession Number: edsarx.2406.07436
Database: arXiv