Learning to Adapt Foundation Model DINOv2 for Capsule Endoscopy Diagnosis

Bibliographic Details
Title: Learning to Adapt Foundation Model DINOv2 for Capsule Endoscopy Diagnosis
Authors: Zhang, Bowen; Chen, Ying; Bai, Long; Zhao, Yan; Sun, Yuxiang; Yuan, Yixuan; Zhang, Jianhua; Ren, Hongliang
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: Foundation models have become prominent in computer vision, achieving notable success in various tasks. However, their effectiveness largely depends on pre-training with extensive datasets, so applying foundation models directly to small datasets of capsule endoscopy images from scratch is challenging; pre-training on broad, general vision datasets is crucial for successfully fine-tuning the model for specific tasks. In this work, we introduce a simplified approach that adapts foundation models using the low-rank adaptation (LoRA) technique for easier customization. Our method, built on the DINOv2 foundation model, applies low-rank adaptation learning to tailor foundation models effectively for capsule endoscopy diagnosis. Unlike traditional fine-tuning methods, our strategy includes LoRA layers designed to absorb specific surgical domain knowledge. During training, we keep the main model (the backbone encoder) fixed and optimize only the LoRA layers and the disease classification component. We tested our method on two publicly available datasets for capsule endoscopy disease classification, achieving 97.75% accuracy on the Kvasir-Capsule dataset and 98.81% on the Kvasirv2 dataset. Our solution demonstrates that foundation models can be adeptly adapted for capsule endoscopy diagnosis, and that mere reliance on straightforward fine-tuning or pre-trained models from general computer vision tasks is inadequate for such specific applications.
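The adaptation scheme described above (frozen backbone, trainable low-rank layers) follows the standard LoRA formulation: a frozen weight matrix W is augmented with a trainable low-rank update (alpha/r)·B·A, where B is initialized to zero so the adapted layer initially matches the pre-trained one. The following is a minimal NumPy sketch of this idea, not the authors' implementation; all sizes and names (d, r, alpha) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2     # feature dimension and LoRA rank (illustrative values)
alpha = 4.0     # LoRA scaling factor (illustrative)

# Frozen pre-trained weight: kept fixed during fine-tuning.
W = rng.normal(size=(d, d))

# Trainable low-rank factors: A gets a small random init, B starts at
# zero, so at initialization the adapted layer equals the frozen one.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x):
    """y = W x + (alpha / r) * B (A x); only A and B would be trained."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
y = lora_forward(x)
# With B == 0, the LoRA branch contributes nothing yet.
assert np.allclose(y, W @ x)
```

Because only A and B (and, per the abstract, the classification head) receive gradients, the number of trainable parameters is 2·d·r per adapted layer instead of d², which is what makes fine-tuning feasible on small capsule endoscopy datasets.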
Comment: To appear in ICBIR 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.10508
Accession Number: edsarx.2406.10508
Database: arXiv