Bridging the Gap between Newton-Raphson Method and Regularized Policy Iteration

Bibliographic Details
Title: Bridging the Gap between Newton-Raphson Method and Regularized Policy Iteration
Authors: Li, Zeyang, Hu, Chuxiong, Wang, Yunan, Zhan, Guojian, Li, Jie, Li, Shengbo Eben
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: Regularization is one of the most important techniques in reinforcement learning algorithms. The well-known soft actor-critic algorithm is a special case of regularized policy iteration in which the regularizer is chosen to be Shannon entropy. Despite some empirical success of regularized policy iteration, its theoretical underpinnings remain unclear. This paper proves that regularized policy iteration is strictly equivalent to the standard Newton-Raphson method when the Bellman equation is smoothed with strongly convex functions. This equivalence lays the foundation for a unified analysis of both the global and local convergence behaviors of regularized policy iteration. We prove that regularized policy iteration has global linear convergence at the rate $\gamma$ (the discount factor). Furthermore, the algorithm converges quadratically once it enters a local region around the optimal value. We also show that a modified version of regularized policy iteration, i.e., with finite-step policy evaluation, is equivalent to the inexact Newton method, in which the Newton iteration is solved with truncated iterations. We prove that the associated algorithm achieves an asymptotic linear convergence rate of $\gamma^M$, where $M$ denotes the number of steps carried out in policy evaluation. Our results take a solid step towards a better understanding of the convergence properties of regularized policy iteration algorithms.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2310.07211
Accession Number: edsarx.2310.07211
Database: arXiv
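The abstract's central claim can be illustrated concretely: with Shannon entropy as the regularizer, the smoothed Bellman equation becomes $V(s) = \tau \log \sum_a \exp\!\big((r(s,a) + \gamma \sum_{s'} P(s'|s,a) V(s'))/\tau\big)$, and one step of regularized policy iteration (softmax policy improvement followed by exact soft policy evaluation) coincides with one Newton-Raphson step on its residual. The sketch below is illustrative only: the toy two-state, two-action MDP, the temperature $\tau$, and all function names are assumptions invented for demonstration, not taken from the paper.

```python
import numpy as np

# Hypothetical toy MDP (2 states, 2 actions), invented for illustration.
gamma, tau = 0.9, 0.5                       # discount factor, entropy temperature
P = np.array([[[0.8, 0.2], [0.1, 0.9]],     # P[s, a, s'] transition probabilities
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],                   # R[s, a] rewards
              [0.0, 2.0]])

def soft_bellman_residual(V):
    """Residual of the entropy-smoothed Bellman equation B(V) = 0,
    where B(V)(s) = tau * logsumexp_a(Q(s, a) / tau) - V(s)."""
    Q = R + gamma * P @ V                   # Q[s, a]
    return tau * np.log(np.exp(Q / tau).sum(axis=1)) - V

def regularized_policy_iteration_step(V):
    """One step of regularized policy iteration; per the paper's result,
    this equals one exact Newton-Raphson step on the residual above."""
    Q = R + gamma * P @ V
    pi = np.exp(Q / tau)
    pi /= pi.sum(axis=1, keepdims=True)     # softmax policy (improvement step)
    P_pi = np.einsum('sa,sap->sp', pi, P)   # state transitions under pi
    # Entropy-augmented expected reward: E_pi[R] + tau * H(pi)
    r_pi = (pi * (R - tau * np.log(pi))).sum(axis=1)
    # Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi
    return np.linalg.solve(np.eye(len(V)) - gamma * P_pi, r_pi)

V = np.zeros(2)
for _ in range(200):
    V = regularized_policy_iteration_step(V)
    if np.max(np.abs(soft_bellman_residual(V))) < 1e-10:
        break
print(np.max(np.abs(soft_bellman_residual(V))))  # residual near zero at convergence
```

The loop count (200) reflects the paper's worst-case global linear rate $\gamma$; in practice the locally quadratic (Newton-type) behavior drives the residual to machine precision in far fewer iterations than plain fixed-point iteration on the smoothed operator would need.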