Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models

Bibliographic Details
Title: Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models
Authors: Sauvalle, Bruno; Salzmann, Mathieu
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Machine Learning; I.4.6
Description: In this paper we consider the task of label-efficient fine-tuning of segmentation models: we assume that a large labeled dataset is available and allows an accurate segmentation model to be trained in one domain, and that this model must then be adapted to a related domain where only a few labeled samples are available. We observe that this adaptation can be done using two distinct methods. The first, supervised pretraining, is simply to take the model trained on the first domain with classical supervised learning and fine-tune it on the second domain using the available labeled samples. The second is to perform self-supervised pretraining on the first domain using a generic pretext task in order to obtain high-quality representations, which can then be used to train a model on the second domain in a label-efficient way. We propose to fuse these two approaches by introducing a new pretext task: performing image denoising and mask prediction simultaneously on the first domain. We motivate this choice by showing that, just as an image denoiser conditioned on the noise level can be viewed as a generative model for the unlabeled image distribution under the theory of diffusion models, a model trained on this new pretext task can be viewed as a generative model for the joint distribution of images and segmentation masks, under the assumption that the mapping from images to segmentation masks is deterministic. We then show empirically on several datasets that fine-tuning a model pretrained with this approach yields better results than fine-tuning a similar model trained with either supervised or unsupervised pretraining alone.
Comment: 19 pages
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.03433
Accession Number: edsarx.2408.03433
Database: arXiv
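
The pretext task summarized in the description above (simultaneous image denoising and mask prediction, with the model conditioned on the noise level) can be sketched as a single pretraining step roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module name HybridDenoiser, the two-head architecture, the noise schedule, and the loss weight lambda_mask are all assumptions made for illustration.

# Minimal sketch of the hybrid pretext task: the network receives a noisy image
# and the noise level, and is trained to predict both the clean image (denoising,
# as in diffusion models) and the segmentation mask. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridDenoiser(nn.Module):
    """Toy encoder with two heads: a denoised image and segmentation-mask logits."""
    def __init__(self, in_ch=3, num_classes=2, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        # Noise-level conditioning via a simple learned embedding added to the features.
        self.t_embed = nn.Linear(1, width)
        self.image_head = nn.Conv2d(width, in_ch, 3, padding=1)       # denoising head
        self.mask_head = nn.Conv2d(width, num_classes, 3, padding=1)  # segmentation head

    def forward(self, noisy_image, t):
        h = self.encoder(noisy_image)
        h = h + self.t_embed(t.view(-1, 1)).view(-1, h.shape[1], 1, 1)
        return self.image_head(h), self.mask_head(h)

def pretraining_step(model, image, mask, optimizer, lambda_mask=1.0):
    """One hybrid pretraining step on the labeled source domain.

    image: B x C x H x W float tensor; mask: B x H x W long tensor of class indices.
    """
    t = torch.rand(image.shape[0], device=image.device)   # random noise level in [0, 1]
    noise = torch.randn_like(image)
    # Simple variance-preserving-style corruption (illustrative schedule, not the paper's).
    alpha = (1.0 - t).view(-1, 1, 1, 1)
    noisy = alpha.sqrt() * image + (1.0 - alpha).sqrt() * noise

    pred_image, mask_logits = model(noisy, t)
    denoise_loss = F.mse_loss(pred_image, image)           # generative (diffusion) objective
    seg_loss = F.cross_entropy(mask_logits, mask)           # supervised segmentation objective
    loss = denoise_loss + lambda_mask * seg_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage (shapes only; data loading omitted):
# model = HybridDenoiser()
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = pretraining_step(model, images, masks, opt)   # images: B x 3 x H x W, masks: B x H x W

After this joint pretraining, fine-tuning on the target domain would reuse the pretrained encoder and mask head with the few available labeled samples, which is the label-efficient adaptation setting the abstract describes.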