RLingua: Improving Reinforcement Learning Sample Efficiency in Robotic Manipulations With Large Language Models

Bibliographic Details
Title: RLingua: Improving Reinforcement Learning Sample Efficiency in Robotic Manipulations With Large Language Models
Authors: Chen, Liangliang, Lei, Yutian, Jin, Shiyu, Zhang, Ying, Zhang, Liangjun
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Robotics, Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction, Computer Science - Machine Learning
Description: Reinforcement learning (RL) has demonstrated its capability in solving various tasks but is notorious for its low sample efficiency. In this paper, we propose RLingua, a framework that leverages the internal knowledge of large language models (LLMs) to reduce the sample complexity of RL in robotic manipulations. To this end, we first present a method for extracting the prior knowledge of LLMs by prompt engineering so that a preliminary rule-based robot controller for a specific task can be generated in a user-friendly manner. Despite being imperfect, the LLM-generated robot controller is utilized to produce action samples during rollouts with a decaying probability, thereby improving RL's sample efficiency. We employ TD3, a widely used RL baseline method, and modify its actor loss to regularize policy learning toward the LLM-generated controller. RLingua also provides a novel method for improving imperfect LLM-generated robot controllers through RL. We demonstrate that RLingua can significantly reduce the sample complexity of TD3 in four robot tasks from panda_gym and achieve high success rates in 12 sampled sparsely rewarded robot tasks in RLBench, where standard TD3 fails. Additionally, we validate RLingua's effectiveness in real-world robot experiments through Sim2Real transfer, demonstrating that the learned policies are effectively transferable to real robot tasks. Further details about our work are available at our project website https://rlingua.github.io.
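The abstract describes two mechanisms: sampling rollout actions from the imperfect LLM-generated controller with a decaying probability, and adding a regularization term to the TD3 actor loss that pulls the policy toward that controller. The sketch below illustrates both ideas under assumptions; the names `llm_controller`, `td3_policy`, `env`, `replay_buffer`, the exponential decay schedule, and the MSE regularizer are illustrative stand-ins, not the authors' actual implementation or hyperparameters.

```python
# Minimal sketch of the two mechanisms described in the abstract.
# All names and schedules below are assumptions for illustration only.
import numpy as np
import torch
import torch.nn.functional as F

def collect_episode(env, td3_policy, llm_controller, replay_buffer,
                    episode_idx, eps_start=0.9, eps_decay=0.995):
    """Roll out one episode, taking actions from the imperfect
    LLM-generated rule-based controller with a decaying probability."""
    eps = eps_start * (eps_decay ** episode_idx)  # assumed decay schedule
    obs, done = env.reset(), False
    while not done:
        if np.random.rand() < eps:
            action = llm_controller(obs)            # prior-knowledge controller
        else:
            action = td3_policy.select_action(obs)  # learned TD3 actor
        next_obs, reward, done, _ = env.step(action)
        replay_buffer.add(obs, action, reward, next_obs, done)
        obs = next_obs

def regularized_actor_loss(critic, actor, llm_controller, obs_batch, lam=0.1):
    """Standard TD3 actor objective plus a term that regularizes the
    policy toward the LLM-generated controller's actions (weight `lam`)."""
    actions = actor(obs_batch)
    q_loss = -critic.Q1(obs_batch, actions).mean()  # usual TD3 actor loss
    with torch.no_grad():
        ref_actions = torch.as_tensor(
            np.stack([llm_controller(o) for o in obs_batch.cpu().numpy()]),
            dtype=actions.dtype, device=actions.device)
    reg = F.mse_loss(actions, ref_actions)          # imitation-style regularizer
    return q_loss + lam * reg
```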
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.06420
Accession Number: edsarx.2403.06420
Database: arXiv