Feedback Efficient Online Fine-Tuning of Diffusion Models

Bibliographic Details
Title: Feedback Efficient Online Fine-Tuning of Diffusion Models
Authors: Uehara, Masatoshi; Zhao, Yulai; Black, Kevin; Hajiramezanali, Ehsan; Scalia, Gabriele; Diamant, Nathaniel Lee; Tseng, Alex M.; Levine, Sergey; Biancalani, Tommaso
Publication Year: 2024
Collection: Computer Science, Quantitative Biology, Statistics
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Quantitative Biology - Quantitative Methods, Statistics - Machine Learning
Description: Diffusion models excel at modeling complex data distributions, including those of images, proteins, and small molecules. However, in many cases, our goal is to model parts of the distribution that maximize certain properties: for example, we may want to generate images with high aesthetic quality, or molecules with high bioactivity. It is natural to frame this as a reinforcement learning (RL) problem, in which the objective is to fine-tune a diffusion model to maximize a reward function that corresponds to some property. Even with access to online queries of the ground-truth reward function, efficiently discovering high-reward samples can be challenging: they might have a low probability in the initial distribution, and there might be many infeasible samples that do not even have a well-defined reward (e.g., unnatural images or physically impossible molecules). In this work, we propose a novel reinforcement learning procedure that efficiently explores on the manifold of feasible samples. We present a theoretical analysis providing a regret guarantee, as well as empirical validation across three domains: images, biological sequences, and molecules.
Comment: Accepted at ICML 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.16359
Accession Number: edsarx.2402.16359
Database: arXiv