Academic Journal

Patient-Friendly Discharge Summaries in Korea Based on ChatGPT: Software Development and Validation.

Bibliographic Details
Title: Patient-Friendly Discharge Summaries in Korea Based on ChatGPT: Software Development and Validation.
Authors: Kim H; College of Nursing, Yonsei University, Seoul, Korea., Jin HM; Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea., Jung YB; Department of Surgery, Yonsei University College of Medicine, Seoul, Korea., You SC; Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea. Chandryou@yuhs.ac.
Source: Journal of Korean medical science [J Korean Med Sci] 2024 Apr 29; Vol. 39 (16), pp. e148. Date of Electronic Publication: 2024 Apr 29.
Publication Type: Journal Article
Language: English
Journal Information: Publisher: Korean Academy of Medical Science; Country of Publication: Korea (South); NLM ID: 8703518; Publication Model: Electronic; Cited Medium: Internet; ISSN: 1598-6357 (Electronic); Linking ISSN: 1011-8934; NLM ISO Abbreviation: J Korean Med Sci; Subsets: MEDLINE
Imprint Names: Original Publication: Seoul, Korea: Korean Academy of Medical Science, [1986-
MeSH Terms: Software*, Patient Discharge*, Humans; Republic of Korea; Myocardial Infarction/diagnosis; Patient Satisfaction; Patient Discharge Summaries; Electronic Health Records
Abstract: Background: Although discharge summaries in patient-friendly language can enhance patient comprehension and satisfaction, they can also increase medical staff workload. Using a large language model, we developed and validated software that generates a patient-friendly discharge summary.
Methods: We developed and tested the software using 100 discharge summary documents, 50 for patients with myocardial infarction and 50 for patients treated in the Department of General Surgery. For each document, three new summaries were generated using three different prompting methods (Zero-shot, One-shot, and Few-shot) and graded using a 5-point Likert Scale regarding factuality, comprehensiveness, usability, ease, and fluency. We compared the effects of different prompting methods and assessed the relationship between input length and output quality.
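As an illustration of the three prompting methods described above, the following minimal Python sketch shows how Zero-shot, One-shot, and Few-shot prompts could be assembled for a chat-completion API. The model name, system instruction, and example pairs are placeholders assumed for illustration; the record does not provide the authors' actual prompts, model version, or parameters.

# Minimal sketch of Zero-shot vs. One-shot vs. Few-shot prompting with a chat-completion API.
# The system prompt, model name, and example summaries are illustrative assumptions,
# not the authors' actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Rewrite the following hospital discharge summary in plain, patient-friendly "
    "language, avoiding medical jargon and keeping all clinically important facts."
)

def build_messages(discharge_summary: str, examples: list[tuple[str, str]]):
    """Zero-shot: examples = []; One-shot: one (original, friendly) pair; Few-shot: several pairs."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for original, friendly in examples:
        messages.append({"role": "user", "content": original})
        messages.append({"role": "assistant", "content": friendly})
    messages.append({"role": "user", "content": discharge_summary})
    return messages

def generate_friendly_summary(discharge_summary: str, examples: list[tuple[str, str]] = []):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; the record does not state which ChatGPT model was used
        messages=build_messages(discharge_summary, examples),
        temperature=0.2,
    )
    return response.choices[0].message.content

In this sketch, Zero-shot corresponds to an empty example list, One-shot to a single (original, patient-friendly) example pair, and Few-shot to several such pairs prepended before the target discharge summary.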
Results: The mean overall scores differed across prompting methods (4.19 ± 0.36 in Few-shot, 4.11 ± 0.36 in One-shot, and 3.73 ± 0.44 in Zero-shot; P < 0.001). Post-hoc analysis indicated that scores were higher with Few-shot and One-shot prompts than with Zero-shot prompts, whereas there was no significant difference between Few-shot and One-shot prompts. The overall proportion of outputs that scored ≥ 4 was 77.0% (95% confidence interval [CI], 68.8-85.3%), 70.0% (95% CI, 61.0-79.0%), and 32.0% (95% CI, 22.9-41.1%) with Few-shot, One-shot, and Zero-shot prompts, respectively. The mean factuality score was 4.19 ± 0.60 with Few-shot, 4.20 ± 0.55 with One-shot, and 3.82 ± 0.57 with Zero-shot prompts. Input length and the overall score were negatively correlated in the Zero-shot (r = -0.437, P < 0.001) and One-shot (r = -0.327, P < 0.001) tests but not in the Few-shot (r = -0.050, P = 0.625) tests.
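For reference, the 95% confidence intervals reported above for the proportion of outputs scoring ≥ 4 (n = 100 documents per prompting method) can be reproduced, to within rounding, with a normal-approximation (Wald) interval. The record does not state which interval method the authors actually used, so the short sketch below is an assumption.

# Reproducing the reported 95% CIs for the proportion of outputs scoring >= 4,
# assuming a normal-approximation (Wald) interval with n = 100 per prompting method.
from math import sqrt

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    # z = 1.96 gives an approximate 95% interval
    half_width = z * sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

for label, p in [("Few-shot", 0.77), ("One-shot", 0.70), ("Zero-shot", 0.32)]:
    low, high = wald_ci(p, 100)
    print(f"{label}: {p:.0%} (95% CI, {low:.1%}-{high:.1%})")
# Prints:
# Few-shot: 77% (95% CI, 68.8%-85.2%)
# One-shot: 70% (95% CI, 61.0%-79.0%)
# Zero-shot: 32% (95% CI, 22.9%-41.1%)

The computed upper bound for Few-shot (85.2%) differs from the reported 85.3% only by rounding; the other bounds match the reported values exactly.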
Conclusion: Large language models utilizing Few-shot prompts generally produce acceptable discharge summaries without significant misinformation. Our research highlights the potential of such models in creating patient-friendly discharge summaries for Korean patients to support patient-centered care.
Competing Interests: You SC reports being the chief technology officer of PHI Digital Healthcare. The other authors have no potential conflicts of interest to disclose.
(© 2024 The Korean Academy of Medical Sciences.)
References: J Mov Disord. 2023 May;16(2):158-162. (PMID: 37258279)
BMJ Open Qual. 2022 Aug;11(3):. (PMID: 35998981)
Intern Med J. 2014 Sep;44(9):851-7. (PMID: 24863954)
Int J Qual Health Care. 2017 Oct 01;29(6):752-768. (PMID: 29025093)
JAMA Intern Med. 2023 Jun 1;183(6):589-596. (PMID: 37115527)
Ann Fam Med. 2011 Mar-Apr;9(2):100-3. (PMID: 21403134)
J Med Internet Res. 2020 Jul 15;22(7):e19274. (PMID: 32673234)
Sci Rep. 2023 Aug 30;13(1):14215. (PMID: 37648742)
Lancet Digit Health. 2023 Mar;5(3):e107-e108. (PMID: 36754724)
Int J Med Inform. 2017 May;101:100-107. (PMID: 28347440)
JAMA Intern Med. 2023 Sep 1;183(9):1026-1027. (PMID: 37459091)
PLOS Digit Health. 2023 Feb 9;2(2):e0000198. (PMID: 36812645)
Nat Med. 2023 Aug;29(8):1930-1940. (PMID: 37460753)
JAMA Intern Med. 2022 May 1;182(5):564-566. (PMID: 35344006)
J Korean Med Sci. 2023 Jul 03;38(26):e207. (PMID: 37401498)
NPJ Digit Med. 2024 Feb 19;7(1):40. (PMID: 38374445)
NPJ Digit Med. 2023 Aug 24;6(1):158. (PMID: 37620423)
Soc Sci Med. 2000 Oct;51(7):1087-110. (PMID: 11005395)
Radiology. 2023 Jul;308(1):e230970. (PMID: 37489981)
Nat Med. 2023 Jun;29(6):1296-1297. (PMID: 37169865)
JAMA. 2023 Mar 14;329(10):842-844. (PMID: 36735264)
J Patient Saf. 2021 Oct 1;17(7):e637-e644. (PMID: 28885382)
Nature. 2023 Aug;620(7972):172-180. (PMID: 37438534)
Grant Information: 6-2023-0067 Korea YUCM Yonsei University College of Medicine
Contributed Indexing: Keywords: Artificial Intelligence; ChatGPT; Documentation; Large Language Model; Patient Discharge Summaries; Patient-Centered Care
Entry Dates: Date Created: 20240430; Date Completed: 20240430; Latest Revision: 20240502
Update Code: 20240502
PubMed Central ID: PMC11058343
DOI: 10.3346/jkms.2024.39.e148
PMID: 38685890
Database: MEDLINE