Evaluation of the Programming Skills of Large Language Models

Bibliographic Details
Title: Evaluation of the Programming Skills of Large Language Models
Authors: Heitz, Luc Bryan, Chamas, Joun, Scherb, Christopher
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Software Engineering, Computer Science - Computation and Language, Computer Science - Cryptography and Security
Description: The advent of Large Language Models (LLMs) has revolutionized the efficiency and speed with which tasks are completed, marking a significant leap in productivity through technological innovation. As these chatbots tackle increasingly complex tasks, the challenge of assessing the quality of their outputs has become paramount. This paper critically examines the output quality of two leading LLMs, OpenAI's ChatGPT and Google's Gemini AI, by comparing the quality of programming code generated in their free versions. Through the lens of a real-world example coupled with a systematic dataset, we investigate the code quality produced by these LLMs. Given their notable proficiency in code generation, this aspect of chatbot capability presents a particularly compelling area for analysis. Furthermore, the complexity of programming code often escalates to levels where its verification becomes a formidable task, underscoring the importance of our study. This research aims to shed light on the efficacy and reliability of LLMs in generating high-quality programming code, an endeavor that has significant implications for the field of software development and beyond.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2405.14388
Accession Number: edsarx.2405.14388
Database: arXiv