A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics

Bibliographic details
Title: A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics
Authors: Zhou, Hong-Yu, Yu, Yizhou, Wang, Chengdi, Zhang, Shu, Gao, Yuanxu, Pan, Jia, Shao, Jun, Lu, Guangming, Zhang, Kang, Li, Weimin
Publication year: 2023
Collection: Computer Science
Subject terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Computation and Language, Computer Science - Machine Learning
Description: During the diagnostic process, clinicians leverage multimodal information, such as chief complaints, medical images, and laboratory-test results. Deep-learning models for aiding diagnosis have yet to meet this requirement. Here we report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model uses embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and bidirectional blocks with intramodal and intermodal attention to learn a holistic representation of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory-test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Leveraging unified multimodal Transformer-based models may help streamline triage of patients and facilitate the clinical decision process.
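The abstract's core idea is that modality-specific inputs are mapped by embedding layers into a single token sequence, over which one attention pass covers both intramodal pairs (image-image, text-text) and intermodal pairs (image-text). A minimal numpy sketch of that unified-sequence attention follows; all dimensions, projection matrices, and token counts here are illustrative assumptions, not the paper's actual architecture or sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the full joint sequence
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

d = 32                      # shared embedding dimension (assumed)
n_visual, n_text = 4, 6     # token counts per modality (assumed)

# Hypothetical embedding layers projecting each modality into the shared space:
# e.g. flattened radiograph patch features and text/lab-value features.
W_img = rng.normal(size=(48, d))
W_txt = rng.normal(size=(16, d))
visual_tokens = rng.normal(size=(n_visual, 48)) @ W_img
text_tokens = rng.normal(size=(n_text, 16)) @ W_txt

# Unified sequence: concatenation means a single attention computation
# yields intramodal and intermodal interactions in one pass.
tokens = np.concatenate([visual_tokens, text_tokens], axis=0)
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(tokens @ W_q, tokens @ W_k, tokens @ W_v)
print(out.shape)  # (10, 32): one contextualized vector per token, both modalities
```

The design point the sketch illustrates is that no cross-attention module is needed: once both modalities live in one sequence, the standard attention matrix already contains the image-to-text and text-to-image blocks.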
Comment: Accepted by Nature Biomedical Engineering
Document type: Working Paper
Access URL: http://arxiv.org/abs/2306.00864
Accession number: edsarx.2306.00864
Database: arXiv