How to Explore with Belief: State Entropy Maximization in POMDPs

Bibliographic Details
Title: How to Explore with Belief: State Entropy Maximization in POMDPs
Authors: Zamboni, Riccardo; Cirino, Duilio; Restelli, Marcello; Mutti, Mirco
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
Description: Recent works have studied *state entropy maximization* in reinforcement learning, in which the agent's objective is to learn a policy inducing high entropy over state visitations (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get *partial* observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. In these settings, a significant mismatch can arise between the entropy over observations and the entropy over the true states of the system. In this paper, we address the problem of entropy maximization over the *true states* with a decision policy conditioned on partial observations *only*. The latter is a generalization of POMDPs, which is intractable in general. We develop a memory- and computationally efficient *policy gradient* method to address a first-order relaxation of the objective defined on *belief* states, providing various formal characterizations of approximation gaps, the optimization landscape, and the *hallucination* problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.02295
Accession Number: edsarx.2406.02295
Database: arXiv
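
The sketch below is a minimal illustration of the general idea in the description above, not the authors' implementation: an observation-conditioned policy is trained with a REINFORCE-style policy gradient to maximize a belief-based proxy of the true-state entropy, here the entropy of the episode-averaged Bayes-filter belief. The random tabular POMDP, its sizes, the specific proxy objective, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's code):
# REINFORCE-style policy gradient on a belief-based proxy of state entropy in a
# small random tabular POMDP. The policy only sees observations; beliefs over
# the true states are tracked with a Bayes filter.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, nO, H = 5, 2, 3, 20            # states, actions, observations, horizon

# Random POMDP kernels: T[s, a, s'] transition probs, O[s, o] observation probs.
T = rng.dirichlet(np.ones(nS), size=(nS, nA))
O = rng.dirichlet(np.ones(nO), size=nS)

theta = np.zeros((nO, nA))             # policy parameters, conditioned on the last observation


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))


def rollout():
    """One episode: returns the per-step beliefs and the score-function terms."""
    s = rng.integers(nS)               # true (hidden) state
    b = np.ones(nS) / nS               # uniform prior belief
    beliefs, score_grads = [], []
    for _ in range(H):
        o = rng.choice(nO, p=O[s])     # partial observation of the true state
        b = b * O[:, o]
        b /= b.sum()                   # Bayes correction with the new observation
        beliefs.append(b.copy())
        pi = softmax(theta[o])
        a = rng.choice(nA, p=pi)
        g = np.zeros_like(theta)       # d log pi(a|o) / d theta
        g[o] = -pi
        g[o, a] += 1.0
        score_grads.append(g)
        s = rng.choice(nS, p=T[s, a])  # true state transition (never shown to the policy)
        b = b @ T[:, a, :]             # Bayes prediction step
    return beliefs, score_grads


# Gradient ascent on a proxy objective: entropy of the episode-averaged belief,
# used as a stand-in for the (unobservable) true-state visitation entropy.
lr, n_iters, n_episodes = 0.5, 200, 16
for _ in range(n_iters):
    ep_returns, ep_grads = [], []
    for _ in range(n_episodes):
        beliefs, score_grads = rollout()
        ep_returns.append(entropy(np.mean(beliefs, axis=0)))
        ep_grads.append(sum(score_grads))
    baseline = np.mean(ep_returns)     # simple variance-reduction baseline
    grad = sum((R - baseline) * g for R, g in zip(ep_returns, ep_grads)) / n_episodes
    theta += lr * grad

print("final proxy state-entropy estimate:", np.mean(ep_returns))
```

The entropy of the average belief is only one plausible belief-based surrogate; per the abstract, the paper formally characterizes the approximation gaps of belief-based relaxations rather than prescribing this particular estimator.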