Learning to Perceive in Deep Model-Free Reinforcement Learning

Bibliographic Details
Title: Learning to Perceive in Deep Model-Free Reinforcement Learning
Authors: Querido, Gonçalo, Sardinha, Alberto, Melo, Francisco S.
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition
Description: This work proposes a novel model-free reinforcement learning (RL) agent that learns to complete an unknown task with access to only part of the input observation. We take inspiration from the visual attention and active perception characteristic of humans and apply them to our agent, creating a hard attention mechanism: the model first decides which region of the input image to look at, and only then gains access to the pixels of that region. Current RL agents do not follow this principle, and to our knowledge such mechanisms have not been applied for this purpose. In our architecture, we adapt an existing model, the recurrent attention model (RAM), and combine it with the proximal policy optimization (PPO) algorithm. We investigate whether a model with these characteristics can match the performance of state-of-the-art model-free RL agents that access the full input observation. The analysis covers two Atari games, Pong and SpaceInvaders, which have discrete action spaces, and CarRacing, which has a continuous action space. Besides assessing performance, we also analyze the movement of the model's attention and compare it with an example of human behavior. Even with this visual limitation, our model matches the performance of PPO+LSTM in two of the three games tested.
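The core of the hard attention mechanism described above is that the agent chooses a location before seeing any pixels, and then receives only a small crop ("glimpse") around that location. The following is a minimal illustrative sketch of such a glimpse extraction step, not the paper's actual code; the function name, patch size, and zero-padding choice are assumptions for illustration.

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a size x size patch centered at `center` (row, col),
    zero-padding so glimpses near the image border stay valid.
    This mimics the restricted observation a hard-attention agent
    receives after its attention policy picks a location."""
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half)), mode="constant")
    r, c = center[0] + half, center[1] + half  # shift into padded coordinates
    return padded[r - half : r - half + size, c - half : c - half + size]

# Example: the attention policy outputs a location, and only then
# does the agent see the pixels of that region.
frame = np.arange(100).reshape(10, 10)  # stand-in for a game frame
location = (5, 5)                       # hypothetical output of the attention policy
glimpse = extract_glimpse(frame, location, 4)
print(glimpse.shape)  # (4, 4) -- the agent never observes the rest of the frame
```

In RAM-style models, several such glimpses at decreasing resolutions are usually stacked, and the glimpse plus its location are fed to a recurrent network that selects both the next attention location and, here, the PPO-trained game action.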
Comment: 8 pages; 7 figures; fixed author name; added link for code
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2301.03730
Accession Number: edsarx.2301.03730
Database: arXiv