Report
Federated Q-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost
| Title: | Federated Q-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost |
| --- | --- |
| Authors: | Zheng, Zhong; Zhang, Haochen; Xue, Lingzhou |
| Publication Year: | 2024 |
| Collection: | Computer Science; Statistics |
| Subject Terms: | Statistics - Machine Learning; Computer Science - Machine Learning |
| Description: | In this paper, we consider model-free federated reinforcement learning for tabular episodic Markov decision processes. Under the coordination of a central server, multiple agents collaboratively explore the environment and learn an optimal policy without sharing their raw data. Despite recent advances in federated Q-learning algorithms achieving near-linear regret speedup with low communication cost, existing algorithms attain only suboptimal regret relative to the information bound. We propose a novel model-free federated Q-learning algorithm, termed FedQ-Advantage. Our algorithm leverages reference-advantage decomposition for variance reduction and operates under two distinct event-triggered mechanisms: synchronization between the agents and the server, and policy updates. We prove that our algorithm not only requires a lower, logarithmic communication cost but also achieves almost optimal regret: it reaches the information bound up to a logarithmic factor and attains near-linear regret speedup over its single-agent counterpart when the time horizon is sufficiently large. (An illustrative sketch of these mechanisms follows this record.) |
| Document Type: | Working Paper |
| Access URL: | http://arxiv.org/abs/2405.18795 |
| Accession Number: | edsarx.2405.18795 |
| Database: | arXiv |
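The description above mentions two event-triggered mechanisms (agent-server synchronization and policy updates) and a reference-advantage decomposition for variance reduction. Below is a minimal, hypothetical Python sketch of those ideas on a toy tabular episodic MDP; all names, thresholds, step sizes, and bonuses are illustrative assumptions and are not taken from the paper, whose actual estimators and triggering rules are more involved.

```python
# Illustrative sketch (NOT the paper's algorithm): a toy tabular episodic MDP where
# agents synchronize with a shared Q-table when a local visit count doubles, the
# greedy policy is refreshed at each synchronization, and the Q-target is written as
# a frozen reference value plus an advantage term (the variance-reduction idea
# mentioned in the abstract, heavily simplified here).
import numpy as np

S, A, H, NUM_AGENTS, ROUNDS = 4, 2, 5, 3, 20   # toy problem sizes (assumptions)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(H, S, A))  # P[h, s, a] = distribution over next states
R = rng.uniform(size=(H, S, A))                # mean rewards in [0, 1]

Q = np.full((H, S, A), float(H))               # optimistic initialization
V = np.zeros((H + 1, S))
V_ref = np.zeros((H + 1, S))                   # reference value, frozen once visits are "large"
N = np.zeros((H, S, A), int)                   # global visit counts
ref_frozen = np.zeros((H + 1, S), bool)

def rollout(policy, local_N):
    """One agent collects episodes until some (h, s, a) doubles its local count
    (a stand-in for the paper's event-triggered synchronization condition)."""
    start = np.maximum(local_N.copy(), 1)
    batch = []                                  # (h, s, a, r, s_next) transitions
    while not (local_N >= 2 * start).any():
        s = rng.integers(S)
        for h in range(H):
            a = policy[h, s]
            s_next = rng.choice(S, p=P[h, s, a])
            batch.append((h, s, a, R[h, s, a], s_next))
            local_N[h, s, a] += 1
            s = s_next
    return batch

local_counts = [np.zeros((H, S, A), int) for _ in range(NUM_AGENTS)]
for _ in range(ROUNDS):
    policy = Q.argmax(axis=2)                   # policy update at each synchronization
    for k in range(NUM_AGENTS):
        for h, s, a, r, s_next in rollout(policy, local_counts[k]):
            N[h, s, a] += 1
            lr = (H + 1) / (H + N[h, s, a])     # a standard Q-learning step size
            # Target split into reference + advantage; in this simplified single
            # update the split is cosmetic, unlike the paper's separate estimators.
            target = r + V_ref[h + 1, s_next] + (V[h + 1, s_next] - V_ref[h + 1, s_next])
            bonus = np.sqrt(H ** 2 / N[h, s, a])        # illustrative optimism bonus
            Q[h, s, a] = (1 - lr) * Q[h, s, a] + lr * (target + bonus)
    for h in reversed(range(H)):
        V[h] = np.minimum(Q[h].max(axis=1), H)
        # Freeze the reference value for well-visited states (threshold is arbitrary).
        newly_settled = (~ref_frozen[h]) & (N[h].sum(axis=1) >= 50)
        V_ref[h, newly_settled] = V[h, newly_settled]
        ref_frozen[h] |= newly_settled

print("Greedy policy at step h=0:", Q[0].argmax(axis=1))
```

The doubling-based trigger in `rollout` is only one plausible way to realize event-triggered synchronization; its appeal is that the number of synchronizations grows logarithmically in the number of visits, which is consistent with the logarithmic communication cost highlighted in the description.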