Scaling Team Coordination on Graphs with Reinforcement Learning

Bibliographic Details
Title: Scaling Team Coordination on Graphs with Reinforcement Learning
Authors: Limbu, Manshi; Hu, Zechen; Wang, Xuan; Shishika, Daigo; Xiao, Xuesu
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Robotics
Description: This paper studies Reinforcement Learning (RL) techniques to enable team coordination behaviors in graph environments, where support actions among teammates reduce the cost of traversing certain risky edges in a centralized manner. While classical approaches can solve this non-standard multi-agent path planning problem by converting the original Environment Graph (EG) into a Joint State Graph (JSG) that implicitly incorporates the support actions, those methods do not scale well to large graphs and teams. To address this curse of dimensionality, we propose to use RL so that agents learn such graph-traversal and teammate-supporting behaviors in a data-driven manner. Specifically, through a new formulation of the team-coordination-on-graphs-with-risky-edges problem as a Markov Decision Process (MDP) with a novel state and action space, we investigate how RL can solve it in two paradigms. First, we use RL for a team of agents to learn how to coordinate and reach the goal with minimal cost on a single EG; we show that RL efficiently solves problems with up to 20 nodes and 4 agents, or 25 nodes and 3 agents, using a fraction of the time needed by JSG to solve such complex problems. Second, we learn a general RL policy for any $N$-node EG to produce efficient supporting behaviors. We present extensive experiments and compare our RL approaches against their classical counterparts.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.05787
Accession Number: edsarx.2403.05787
Database: arXiv
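
Note: The Description above refers to the classical baseline that converts the Environment Graph into a Joint State Graph so that support actions are captured implicitly. The paper itself does not provide code here; the following is a minimal, illustrative Python sketch of that idea under assumed conventions (a toy graph, one agent moving per time step, a single designated support node and a fixed reduced cost per risky edge). All graph data, cost values, and function names below are hypothetical and are not taken from the paper.

```python
import heapq

# Toy environment graph (EG): node -> {neighbor: base traversal cost}.
EG = {
    0: {1: 1, 2: 5},
    1: {0: 1, 3: 5},
    2: {0: 5, 3: 1},
    3: {1: 5, 2: 1},
}
# Hypothetical risky edges: (u, v) -> (support_node, reduced_cost).
# Traversing (u, v) costs reduced_cost if a teammate occupies support_node.
RISKY = {(1, 3): (2, 1), (3, 1): (2, 1), (0, 2): (3, 1), (2, 0): (3, 1)}

def edge_cost(u, v, teammates):
    """Cost for one agent to traverse (u, v) given the other agents' positions."""
    if (u, v) in RISKY:
        support_node, reduced = RISKY[(u, v)]
        if support_node in teammates:
            return reduced
    return EG[u][v]

def jsg_shortest_cost(starts, goals):
    """Dijkstra over the implicit Joint State Graph (JSG).

    A joint state is the tuple of all agent positions; each transition moves
    exactly one agent along an EG edge (a simplifying assumption).
    """
    start, goal = tuple(starts), tuple(goals)
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, state = heapq.heappop(pq)
        if state == goal:
            return d
        if d > dist[state]:
            continue
        for i, u in enumerate(state):
            teammates = state[:i] + state[i + 1:]
            for v in EG[u]:
                nxt = state[:i] + (v,) + state[i + 1:]
                nd = d + edge_cost(u, v, teammates)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(pq, (nd, nxt))
    return float("inf")

# Example: agent 0 starts at node 0, agent 1 at node 2; both must reach node 3.
print(jsg_shortest_cost(starts=[0, 2], goals=[3, 3]))  # -> 3 on this toy EG
```

The sketch also makes the scaling problem concrete: the joint state space grows as $N^K$ for $N$ nodes and $K$ agents, which is the curse of dimensionality the paper's RL formulation is meant to sidestep.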