Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits

Bibliographic Details
Title: Data-Driven Upper Confidence Bounds with Near-Optimal Regret for Heavy-Tailed Bandits
Authors: Tamás, Ambrus; Szentpéteri, Szabolcs; Csáji, Balázs Csanád
Publication Year: 2024
Collection: Computer Science, Statistics
Subject Terms: Computer Science - Machine Learning; Statistics - Machine Learning
Description: Stochastic multi-armed bandits (MABs) provide a fundamental reinforcement learning model for studying sequential decision making in uncertain environments. The upper confidence bound (UCB) algorithm sparked the renaissance of bandit algorithms, as it achieves near-optimal regret rates under various moment assumptions. Until recently, most UCB methods relied on concentration inequalities that yield confidence bounds depending on moment parameters, such as the variance proxy, which are usually unknown in practice. In this paper, we propose a new distribution-free, data-driven UCB algorithm for symmetric reward distributions that needs no moment information. The key idea is to combine a refined, one-sided version of the recently developed resampled median-of-means (RMM) method with UCB. We prove a near-optimal regret bound for the proposed anytime, parameter-free RMM-UCB method, even for heavy-tailed distributions.
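To make the general idea concrete, below is a minimal Python sketch of a median-of-means style UCB index for heavy-tailed rewards. It is an illustration only, not the paper's algorithm: the plain median-of-means estimator, the block count of roughly log t, the heuristic bonus sqrt(log t / n), and the reward models are all assumptions chosen for demonstration, and they differ from the authors' refined, one-sided resampled median-of-means (RMM) confidence bounds.

# Minimal sketch of a median-of-means (MoM) style UCB bandit. This is NOT
# the paper's RMM-UCB: the block count, the bonus width, and the reward
# models below are illustrative assumptions only.
import math
import numpy as np

rng = np.random.default_rng(0)

def median_of_means(samples, k):
    # Split the samples into k blocks and take the median of the block means;
    # this is more robust to heavy tails than the plain empirical mean.
    k = max(1, min(k, len(samples)))
    blocks = np.array_split(np.asarray(samples, dtype=float), k)
    return float(np.median([b.mean() for b in blocks]))

def mom_ucb(arms, horizon):
    # `arms` is a list of zero-argument reward samplers; returns pull counts.
    rewards = [[] for _ in arms]
    for t in range(1, horizon + 1):
        if t <= len(arms):
            a = t - 1                          # pull each arm once to start
        else:
            k = max(1, math.ceil(math.log(t)))  # ~log t blocks (assumption)
            scores = [
                median_of_means(r, k) + math.sqrt(math.log(t) / len(r))
                for r in rewards               # heuristic exploration bonus
            ]
            a = int(np.argmax(scores))
        rewards[a].append(arms[a]())
    return [len(r) for r in rewards]

# Example: a Gaussian arm vs. a heavier-tailed, symmetric Student-t arm.
arms = [lambda: rng.normal(0.5, 1.0), lambda: 0.3 + rng.standard_t(2.5)]
print(mom_ucb(arms, 5000))  # the arm with the higher mean should dominate

In this sketch the number of pulls of the suboptimal arm grows only logarithmically in the horizon, which is the qualitative behavior a near-optimal regret bound guarantees; no moment parameter such as a variance proxy is supplied to the index.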
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.05710
Accession Number: edsarx.2406.05710
Database: arXiv