Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning

Bibliographic Details
Title: Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning
Authors: Zhang, Yupei; Wang, Xiaofei; Meng, Fangliangzi; Tang, Jin; Li, Chao
Publication Year: 2024
Collection: Computer Science
Subject Terms: Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
Description: Multi-modal learning plays a crucial role in cancer diagnosis and prognosis. Current deep learning based multi-modal approaches are often limited in their ability to model the complex correlations between genomics and histology data and to address the intrinsic complexity of the tumour ecosystem, where both the tumour and its microenvironment contribute to malignancy. We propose a biologically interpretable and robust multi-modal learning framework that efficiently integrates histology images and genomics by decomposing their feature subspaces to reflect distinct tumour and microenvironment features. To enhance cross-modal interactions, we design a knowledge-driven subspace fusion scheme consisting of a cross-modal deformable attention module and a gene-guided consistency strategy. Additionally, to dynamically optimize the subspace knowledge, we propose a novel gradient coordination learning strategy. Extensive experiments demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art techniques on three downstream tasks: glioma diagnosis, tumour grading, and survival analysis. Our code is available at https://github.com/helenypzhang/Subspace-Multimodal-Learning. (An illustrative sketch of the gradient coordination idea is given after this record.)
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.13979
Accession Number: edsarx.2406.13979
Database: arXiv
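
Illustrative sketch: the abstract mentions a gradient coordination learning strategy for dynamically optimizing the subspace knowledge, but this record does not spell out its formulation. The code below is therefore only a minimal sketch of one common way to coordinate the gradients of two competing objectives (here, a hypothetical tumour-subspace loss and microenvironment-subspace loss) using a PCGrad-style conflict projection; the function name coordinated_step and the two-loss setup are assumptions for illustration, not the authors' implementation, which is available at the linked repository.

    # Minimal, hypothetical sketch of gradient coordination between two
    # subspace losses. Uses a PCGrad-style conflict projection, a standard
    # technique that may differ from the paper's actual strategy.
    import torch
    import torch.nn as nn


    def coordinated_step(model: nn.Module,
                         loss_a: torch.Tensor,
                         loss_b: torch.Tensor,
                         optimizer: torch.optim.Optimizer) -> None:
        """One optimizer step with conflicting gradient components projected out."""
        params = [p for p in model.parameters() if p.requires_grad]

        def flat_grad(loss: torch.Tensor, retain: bool) -> torch.Tensor:
            # Per-loss gradients, flattened into a single vector.
            grads = torch.autograd.grad(loss, params, retain_graph=retain,
                                        allow_unused=True)
            return torch.cat([
                (g if g is not None else torch.zeros_like(p)).reshape(-1)
                for g, p in zip(grads, params)
            ])

        ga = flat_grad(loss_a, retain=True)
        gb = flat_grad(loss_b, retain=False)

        # If the two gradients conflict (negative inner product), project each
        # onto the normal plane of the other before combining them.
        if torch.dot(ga, gb) < 0:
            ga_proj = ga - torch.dot(ga, gb) / (gb.norm() ** 2 + 1e-12) * gb
            gb_proj = gb - torch.dot(gb, ga) / (ga.norm() ** 2 + 1e-12) * ga
            combined = ga_proj + gb_proj
        else:
            combined = ga + gb

        # Write the coordinated gradient back into the parameters and step.
        optimizer.zero_grad()
        offset = 0
        for p in params:
            n = p.numel()
            p.grad = combined[offset:offset + n].view_as(p).clone()
            offset += n
        optimizer.step()

In a training loop, loss_a and loss_b would be the losses attached to the two decomposed subspaces computed from the same forward pass; projecting out the conflicting component keeps one subspace objective from undoing progress on the other.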