A Survey of Large Language Models for Code: Evolution, Benchmarking, and Future Trends

Bibliographic Details
Title: A Survey of Large Language Models for Code: Evolution, Benchmarking, and Future Trends
Authors: Zheng, Zibin; Ning, Kaiwen; Wang, Yanlin; Zhang, Jingwen; Zheng, Dewu; Ye, Mingxi; Chen, Jiachi
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering
Description: General large language models (LLMs), represented by ChatGPT, have demonstrated significant potential in software engineering tasks such as code generation. This has led to the development of specialized LLMs for software engineering, known as Code LLMs. A considerable portion of Code LLMs is derived from general LLMs through model fine-tuning. As a result, Code LLMs are often updated frequently, and their performance can be influenced by the base LLMs. However, there is currently a lack of systematic investigation into Code LLMs and their performance. In this study, we conduct a comprehensive survey and analysis of the types of Code LLMs and their performance differences relative to general LLMs. We aim to address three questions: (1) What LLMs are specifically designed for software engineering tasks, and what is the relationship between these Code LLMs? (2) Do Code LLMs really outperform general LLMs in software engineering tasks? (3) Which LLMs are more proficient in different software engineering tasks? To answer these questions, we first collect relevant literature and work from five major databases and open-source communities, resulting in 134 works for analysis. Next, we categorize the Code LLMs based on their publishers and examine their relationships with general LLMs and among themselves. Furthermore, we investigate the performance differences between general LLMs and Code LLMs in various software engineering tasks to demonstrate the impact of base models on Code LLMs. Finally, we comprehensively compile the performance of LLMs across multiple mainstream benchmarks to identify the best-performing LLMs for each software engineering task. Our research not only assists developers of Code LLMs in choosing base models for the development of more advanced LLMs, but also provides insights for practitioners to better understand key improvement directions for Code LLMs.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2311.10372
Accession Number: edsarx.2311.10372
Database: arXiv