Towards Robust Policy: Enhancing Offline Reinforcement Learning with Adversarial Attacks and Defenses

Bibliographic Details
Title: Towards Robust Policy: Enhancing Offline Reinforcement Learning with Adversarial Attacks and Defenses
Authors: Nguyen, Thanh, Luu, Tung M., Ton, Tri, Yoo, Chang D.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Robotics
Description: Offline reinforcement learning (RL) addresses the challenge of expensive and high-risk data exploration inherent in RL by pre-training policies on vast amounts of offline data, enabling direct deployment or fine-tuning in real-world environments. However, this training paradigm can compromise policy robustness, leading to degraded performance in practical conditions due to observation perturbations or intentional attacks. While adversarial attacks and defenses have been extensively studied in deep learning, their application in offline RL is limited. This paper proposes a framework to enhance the robustness of offline RL models by leveraging advanced adversarial attacks and defenses. The framework attacks the actor and critic components by perturbing observations during training and uses adversarial defenses as regularization to enhance the learned policy. Four attacks and two defenses are introduced and evaluated on the D4RL benchmark. The results show the vulnerability of both the actor and critic to attacks and the effectiveness of the defenses in improving policy robustness. This framework holds promise for enhancing the reliability of offline RL models in practical scenarios.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2405.11206
Accession Number: edsarx.2405.11206
Database: arXiv
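
The description mentions perturbing observations during training and using adversarial defenses as a regularizer on the learned policy. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual method: it assumes a PyTorch actor and critic, and the names (`fgsm_observation_attack`, `epsilon`, `lambda_reg`) are illustrative choices, with an FGSM-style observation attack standing in for the paper's four attacks.

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(actor, obs, epsilon=0.01):
    """Illustrative FGSM-style attack: shift the observation one signed-gradient
    step in the direction that most changes the actor's output."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    with torch.no_grad():
        clean_action = actor(obs)
    # How far the action on the perturbed observation drifts from the clean action.
    drift = F.mse_loss(actor(obs_adv), clean_action)
    grad = torch.autograd.grad(drift, obs_adv)[0]
    # Detach so the perturbed observation is treated as fixed input data downstream.
    return (obs_adv + epsilon * grad.sign()).detach()

def actor_loss_with_adversarial_regularizer(actor, critic, obs,
                                             epsilon=0.01, lambda_reg=0.5):
    """Standard deterministic actor objective plus a smoothness regularizer that
    penalizes action changes under adversarially perturbed observations
    (a defense-as-regularization term, as the abstract describes in outline)."""
    action = actor(obs)
    policy_loss = -critic(obs, action).mean()  # maximize Q under the learned critic
    obs_adv = fgsm_observation_attack(actor, obs, epsilon)
    reg = F.mse_loss(actor(obs_adv), action.detach())
    return policy_loss + lambda_reg * reg
```

For the details of the four attacks, the two defenses, and how the critic is attacked, refer to the paper at the access URL above; the D4RL benchmark used for evaluation provides the offline datasets on which such a loss would be trained.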