Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases

Bibliographic Details
Title: Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases
Authors: Orenstrakh, Michael Sheinman; Karnalim, Oscar; Suarez, Carlos Anibal; Liut, Michael
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Computers and Society
Description: Due to their recent improvements and wide availability, Large Language Models (LLMs) have posed a serious threat to academic integrity in education. Modern LLM-generated text detectors attempt to combat the problem by offering educators services to assess whether a given text is LLM-generated. In this work, we have collected 124 submissions from computer science students before the creation of ChatGPT. We then generated 40 ChatGPT submissions. We used this data to evaluate eight publicly available LLM-generated text detectors through the measures of accuracy, false positives, and resilience. The purpose of this work is to inform the community of which LLM-generated text detectors work and which do not, and to provide insights for educators to better maintain academic integrity in their courses. Our results show that CopyLeaks is the most accurate LLM-generated text detector, GPTKit is the best LLM-generated text detector for reducing false positives, and GLTR is the most resilient LLM-generated text detector. We also express concerns over 52 false positives (of 114 human-written submissions) generated by GPTZero. Finally, we note that all LLM-generated text detectors are less accurate with code, with languages other than English, and after the use of paraphrasing tools (such as QuillBot). Modern detectors are still in need of improvement so that they can offer a foolproof solution to help maintain academic integrity. Further, their usability can be improved by facilitating smooth API integration, providing clear documentation of their features and improving the understandability of their model(s), and supporting more commonly used languages.
Comment: 18 pages total (16 pages, 2 reference pages). In submission
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2307.07411
Accession Number: edsarx.2307.07411
Database: arXiv
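
The abstract evaluates detectors through accuracy, false positives, and resilience. As an illustrative sketch only (not code from the paper), the following Python snippet shows how accuracy and false-positive counts could be computed from labeled detector verdicts; the names `Submission` and `evaluate_detector` are hypothetical.

```python
# Illustrative sketch, not from the paper: computes two of the measures named
# in the abstract (accuracy and false positives) from labeled detector verdicts.
from dataclasses import dataclass

@dataclass
class Submission:
    text: str
    is_llm_generated: bool  # ground truth: True for ChatGPT output, False for human work

def evaluate_detector(detect, submissions):
    """`detect` is any callable returning True when it flags text as LLM-generated."""
    correct = 0
    false_positives = 0
    for s in submissions:
        flagged = detect(s.text)
        if flagged == s.is_llm_generated:
            correct += 1
        if flagged and not s.is_llm_generated:
            false_positives += 1  # human-written submission wrongly flagged
    accuracy = correct / len(submissions)
    return accuracy, false_positives

# Usage with a trivial placeholder detector (for illustration only):
# accuracy, fp = evaluate_detector(lambda t: "As an AI language model" in t, dataset)
```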