Information-Theoretic State Variable Selection for Reinforcement Learning

Bibliographic Details
Title: Information-Theoretic State Variable Selection for Reinforcement Learning
Authors: Westphal, Charles; Hailes, Stephen; Musolesi, Mirco
Publication Year: 2024
Collection: Computer Science; Mathematics
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Information Theory
Description: Identifying the most suitable variables to represent the state is a fundamental challenge in Reinforcement Learning (RL). These variables must efficiently capture the information necessary for making optimal decisions. To address this problem, in this paper we introduce the Transfer Entropy Redundancy Criterion (TERC), an information-theoretic criterion which determines whether entropy is transferred from state variables to actions during training. We define an algorithm based on TERC that provably excludes variables from the state that have no effect on the final performance of the agent, resulting in more sample-efficient learning. Experimental results show that this speed-up is present across three different algorithm classes (represented by tabular Q-learning, Actor-Critic, and Proximal Policy Optimization (PPO)) in a variety of environments. Furthermore, to highlight the differences between the proposed methodology and current state-of-the-art feature selection approaches, we present a series of controlled experiments on synthetic data before generalizing to real-world decision-making tasks. We also introduce a Bayesian-network representation of the problem that compactly captures the transfer of information from state variables to actions.
Comment: 47 pages, 12 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2401.11512
Accession Number: edsarx.2401.11512
Database: arXiv
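
Note: the description above centres on transfer entropy from state variables to actions. The following is a minimal illustrative sketch, not the paper's TERC algorithm: a plug-in estimate of Schreiber-style transfer entropy for discrete sequences with history length 1. The transfer_entropy function, the toy trajectories, and the exclusion threshold are assumptions made here for illustration only.

from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in estimate of Schreiber-style transfer entropy T_{source -> target}
    for discrete sequences, using a history length of 1 on both sides."""
    assert len(source) == len(target)
    joint = Counter()   # counts of (a_next, a_prev, x_prev)
    given = Counter()   # counts of (a_prev, x_prev)
    pair = Counter()    # counts of (a_next, a_prev)
    prev = Counter()    # counts of (a_prev,)
    n = len(target) - 1
    for t in range(n):
        a_next, a_prev, x_prev = target[t + 1], target[t], source[t]
        joint[(a_next, a_prev, x_prev)] += 1
        given[(a_prev, x_prev)] += 1
        pair[(a_next, a_prev)] += 1
        prev[a_prev] += 1
    te = 0.0
    for (a_next, a_prev, x_prev), c in joint.items():
        p_joint = c / n                                 # p(a_next, a_prev, x_prev)
        p_full = c / given[(a_prev, x_prev)]            # p(a_next | a_prev, x_prev)
        p_base = pair[(a_next, a_prev)] / prev[a_prev]  # p(a_next | a_prev)
        te += p_joint * log2(p_full / p_base)
    return te

# Toy usage (threshold is illustrative): here state_var is fully redundant with
# the action history (x_t = 1 - a_t), so its estimated transfer entropy is zero
# and it would be flagged as a candidate for exclusion from the state.
actions   = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
state_var = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]
if transfer_entropy(state_var, actions) < 0.05:
    print("exclude this variable from the state representation")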