Academic Journal
Evaluating ChatGPT as a Self-Learning Tool in Medical Biochemistry: A Performance Assessment in Undergraduate Medical University Examination
Title: | Evaluating ChatGPT as a Self-Learning Tool in Medical Biochemistry: A Performance Assessment in Undergraduate Medical University Examination |
---|---|
Language: | English |
Authors: | Krishna Mohan Surapaneni (ORCID) |
Source: | Biochemistry and Molecular Biology Education. 2024 52(2):237-248. |
Availability: | Wiley. Available from: John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030. Tel: 800-835-6770; e-mail: cs-journals@wiley.com; Web site: https://www.wiley.com/en-us |
Peer Reviewed: | Y |
Page Count: | 12 |
Publication Date: | 2024 |
Document Type: | Journal Articles; Reports - Research |
Education Level: | Higher Education; Postsecondary Education |
Descriptors: | Biochemistry, Science Instruction, Artificial Intelligence, Teaching Methods, Undergraduate Students, Medical Education, Independent Study, Correlation, Evaluators, Science Tests, Item Analysis, Test Items, Accuracy, Scoring |
DOI: | 10.1002/bmb.21808 |
ISSN: | 1470-8175; 1539-3429 |
Abstract: | The emergence of ChatGPT as one of the most advanced chatbots, with its ability to generate diverse content, has prompted discussion worldwide regarding its utility, particularly in advancing medical education and research. This study assesses the performance of ChatGPT in medical biochemistry to evaluate its potential as an effective self-learning tool for medical students. The evaluation used the university examination question papers of Parts 1 and 2 of medical biochemistry, which comprised theory and multiple-choice questions (MCQs) for a total of 100 marks in each part. The questions were posed to ChatGPT, and three raters independently reviewed and scored the answers to prevent bias in scoring. We computed the inter-item correlation matrix and the intraclass correlation between Raters 1, 2, and 3. For the MCQs, symmetric measures in the form of the kappa value (a measure of agreement) were computed between Raters 1, 2, and 3. ChatGPT generated relevant and appropriate answers to all questions, along with explanations for the MCQs. ChatGPT "passed" the medical biochemistry university examination with a combined score of 117 out of 200 (58%) across both papers, securing 60 ± 2.29 in Paper 1 and 57 ± 4.36 in Paper 2. The kappa value for every cross-analysis of the Rater 1, Rater 2, and Rater 3 MCQ scores was 1.000. The evaluation of ChatGPT as a self-learning tool in medical biochemistry has yielded important insights: while it is encouraging that ChatGPT demonstrated proficiency in this area, the overall score of 58% indicates that there is work to be done. To unlock its full potential as a self-learning tool, ChatGPT must generate content that is not only accurate but also comprehensive and contextually relevant. |
Abstractor: | As Provided |
Entry Date: | 2024 |
Accession Number: | EJ1419153 |
Database: | ERIC |
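The abstract reports perfect inter-rater agreement (kappa = 1.000) on the MCQ scores across all rater pairs. As a minimal sketch of how such a figure is computed, the following implements Cohen's kappa for two raters in plain Python; the rater data shown is made up for illustration, since the study's raw scores are not included in this record:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each
    rater's marginal label frequencies.
    """
    if len(ratings1) != len(ratings2) or not ratings1:
        raise ValueError("ratings must be non-empty and equal length")
    n = len(ratings1)
    # Observed proportion of items on which the raters agree
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement from the marginal distribution of each rater
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters used a single label
    return (p_o - p_e) / (1 - p_e)

# Hypothetical MCQ marks (1 = correct, 0 = incorrect) from two raters
rater1 = [1, 1, 0, 1, 0, 1]
rater2 = [1, 1, 0, 1, 0, 1]
print(cohens_kappa(rater1, rater2))  # identical scorings -> 1.0
```

When two raters score every item identically, p_o = 1 and kappa evaluates to 1.0, which is consistent with the value the abstract reports for all rater pairs on the MCQ section.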