Offline Policy Optimization in RL with Variance Regularization

Bibliographic Details
Title: Offline Policy Optimization in RL with Variance Regularization
Authors: Islam, Riashat, Sinha, Samarth, Bharadhwaj, Homanga, Arnob, Samin Yeasar, Yang, Zhuoran, Garg, Animesh, Wang, Zhaoran, Li, Lihong, Precup, Doina
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing state-of-the-art algorithms.
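A minimal worked sketch of the double-sampling issue mentioned above, using the standard variational (Fenchel-dual) form of the variance; the paper's actual formulation involves stationary distribution corrections and may differ in detail. Since the square function satisfies $x^2 = \max_{\nu \in \mathbb{R}} \left(2\nu x - \nu^2\right)$, the variance of a return-like quantity $X$ can be rewritten as
\[
\mathrm{Var}(X) \;=\; \mathbb{E}[X^2] - \left(\mathbb{E}[X]\right)^2
\;=\; \mathbb{E}[X^2] - \max_{\nu \in \mathbb{R}} \left(2\nu\,\mathbb{E}[X] - \nu^2\right)
\;=\; \min_{\nu \in \mathbb{R}} \mathbb{E}\!\left[(X - \nu)^2\right].
\]
The last expression contains only a single expectation, so an unbiased stochastic gradient with respect to the policy parameters (and the dual variable $\nu$) can be estimated from single samples, whereas the $\left(\mathbb{E}[X]\right)^2$ term would require two independent samples.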
Comment: Old Draft, Offline RL Workshop, NeurIPS'20
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2212.14405
Accession Number: edsarx.2212.14405
Database: arXiv