Report
Augmenting Unsupervised Reinforcement Learning with Self-Reference
| Title: | Augmenting Unsupervised Reinforcement Learning with Self-Reference |
|---|---|
| Authors: | Zhao, Andrew; Zhu, Erle; Lu, Rui; Lin, Matthieu; Liu, Yong-Jin; Huang, Gao |
| Publication Year: | 2023 |
| Collection: | Computer Science |
| Subject Terms: | Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Robotics |
| Description: | Humans possess the ability to draw on past experiences explicitly when learning new tasks and to apply them accordingly. We believe this capacity for self-referencing is especially advantageous for reinforcement learning agents in the unsupervised pretrain-then-finetune setting. During pretraining, an agent's past experiences can be explicitly utilized to mitigate the nonstationarity of intrinsic rewards. In the finetuning phase, referencing historical trajectories prevents the unlearning of valuable exploratory behaviors. Motivated by these benefits, we propose the Self-Reference (SR) approach, an add-on module explicitly designed to leverage historical information and enhance agent performance within the pretrain-finetune paradigm. Our approach achieves state-of-the-art results in terms of Interquartile Mean (IQM) performance and Optimality Gap reduction on the Unsupervised Reinforcement Learning Benchmark for model-free methods, recording an 86% IQM and a 16% Optimality Gap. Additionally, it improves current algorithms by up to 17% IQM and reduces the Optimality Gap by 31%. Beyond performance enhancement, the Self-Reference add-on also increases sample efficiency, a crucial attribute for real-world applications. Comment: Preprint |
| Document Type: | Working Paper |
| Access URL: | http://arxiv.org/abs/2311.09692 |
| Accession Number: | edsarx.2311.09692 |
| Database: | arXiv |
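The abstract reports results as an Interquartile Mean (IQM), a robust aggregate statistic: the mean of the middle 50% of scores, discarding the bottom and top quartiles. As a minimal sketch (the score values below are illustrative, not results from the paper, and this simple per-list version omits the stratified bootstrapping used in full evaluation protocols):

```python
def interquartile_mean(scores):
    """Mean of the middle 50% of values (bottom and top 25% discarded)."""
    s = sorted(scores)
    n = len(s)
    lo, hi = n // 4, n - n // 4  # indices bounding the middle half
    return sum(s[lo:hi]) / (hi - lo)

# Illustrative normalized scores across runs (hypothetical data)
scores = [0.10, 0.55, 0.60, 0.62, 0.70, 0.75, 0.80, 0.99]
print(interquartile_mean(scores))
```

Unlike the plain mean, the IQM is insensitive to the outlier runs at 0.10 and 0.99, which is why it is favored for aggregating noisy RL benchmark results.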