A Continued Pretrained LLM Approach for Automatic Medical Note Generation

Bibliographic Details
Title: A Continued Pretrained LLM Approach for Automatic Medical Note Generation
Authors: Yuan, Dong, Rastogi, Eti, Naik, Gautam, Rajagopal, Sree Prasanna, Goyal, Sagar, Zhao, Fen, Chintagunta, Bharath, Ward, Jeff
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: LLMs are revolutionizing NLP tasks. However, the use of the most advanced LLMs, such as GPT-4, is often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM purpose-built for medical conversations and evaluated on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA on PubMedQA, with an accuracy of 78.4%. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds the performance of human scribes and other comparable models in correctness and completeness.
Comment: Accepted to NAACL 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.09057
Accession Number: edsarx.2403.09057
Database: arXiv