CamemBERT-bio: Leveraging Continual Pre-training for Cost-Effective Models on French Biomedical Data

Bibliographic Details
Title: CamemBERT-bio: Leveraging Continual Pre-training for Cost-Effective Models on French Biomedical Data
Authors: Touchent, Rian; Romary, Laurent; de la Clergerie, Eric
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence
Description: Clinical data in hospitals are increasingly accessible for research through clinical data warehouses. However, these documents are unstructured, so information must be extracted from medical reports to conduct clinical studies. Transfer learning with BERT-like models such as CamemBERT has enabled major advances for French, especially for named entity recognition. However, these models are trained on general-domain language and are less effective on biomedical data. Addressing this gap, we introduce CamemBERT-bio, a dedicated French biomedical model derived from a new public French biomedical dataset. Through continual pre-training of the original CamemBERT, CamemBERT-bio achieves an improvement of 2.54 points of F1-score on average across various biomedical named entity recognition tasks, reinforcing the potential of continual pre-training as an equally proficient yet less computationally intensive alternative to training from scratch. Additionally, we highlight the importance of using a standard evaluation protocol that provides a clear view of the current state-of-the-art for French biomedical models.
Comment: Accepted to LREC-COLING 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2306.15550
Accession Number: edsarx.2306.15550
Database: arXiv