Rényi State Entropy for Exploration Acceleration in Reinforcement Learning

Bibliographic Details
Title: Rényi State Entropy for Exploration Acceleration in Reinforcement Learning
Authors: Yuan, Mingqi; Pun, Man-on; Wang, Dong
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
Description: One of the most critical challenges in deep reinforcement learning is maintaining the agent's long-term exploration capability. To tackle this problem, it has recently been proposed to provide intrinsic rewards that encourage the agent to explore. However, most existing intrinsic reward-based methods fail to provide sustainable exploration incentives, a problem known as vanishing rewards. In addition, these conventional methods require complex models and additional memory in their learning procedures, resulting in high computational complexity and low robustness. In this work, a novel intrinsic reward module based on the Rényi entropy is proposed to provide high-quality intrinsic rewards. The proposed method is shown to generalize existing state entropy maximization methods. In particular, a $k$-nearest neighbor estimator is introduced for entropy estimation, while a $k$-value search method is designed to guarantee estimation accuracy. Extensive simulation results demonstrate that the proposed Rényi entropy-based method achieves higher performance than existing schemes.
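
Note: the description mentions a $k$-nearest neighbor estimator of Rényi entropy used to produce intrinsic rewards. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation; the function name, the use of state embeddings, and the default values of k, alpha, and the reward scale are all assumptions made for illustration.

# Illustrative sketch (not the paper's code): per-state intrinsic rewards from a
# k-nearest-neighbor Renyi entropy term over a batch of visited states.
import numpy as np

def knn_renyi_intrinsic_rewards(states, k=3, alpha=0.5, eps=1e-8):
    """states: (N, d) array of state embeddings.
    k, alpha (Renyi order, here 0 < alpha < 1), and eps are illustrative defaults."""
    n, d = states.shape
    # Pairwise Euclidean distances within the batch.
    diffs = states[:, None, :] - states[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    # Distance from each state to its k-th nearest neighbor (column 0 is the state itself).
    knn_dists = np.sort(dists, axis=1)[:, k]
    # Per-state term of a Leonenko-style k-NN Renyi entropy estimate (constants dropped):
    # a larger k-NN distance means the state lies in a sparsely visited region,
    # so it receives a larger exploration bonus.
    return (knn_dists + eps) ** (d * (1.0 - alpha))

# Usage: scale the intrinsic term and add it to the environment reward.
batch = np.random.randn(128, 16)              # 128 visited states, 16-dim embeddings (assumed)
r_int = knn_renyi_intrinsic_rewards(batch)
augmented = 0.1 * r_int                       # beta = 0.1 is an illustrative scale factor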
Comment: 10 pages, 6 figures. arXiv admin note: substantial text overlap with arXiv:2203.02298
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2203.04297
Accession Number: edsarx.2203.04297
Database: arXiv