
An Exploration-free Method for a Linear Stochastic Bandit Driven by a Linear Gaussian Dynamical System

Jonathan Gornet, Yilin Mo, Bruno Sinopoli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In stochastic multi-armed bandits, a major problem the learner faces is the trade-off between exploration and exploitation. Recently, exploration-free methods, i.e., methods that commit to the action predicted to return the highest reward, have been studied from the perspective of linear bandits. In this paper, we introduce a linear bandit setting in which the reward is the output of a linear Gaussian dynamical system. Motivated by a problem encountered in hyperparameter optimization for reinforcement learning, where the number of actions is much higher than the number of training iterations, we propose Kalman filter Observability Dependent Exploration (KODE), an exploration-free method that uses Kalman filter predictions to select actions. The major contribution of this work is the discovery that the performance of the proposed method depends on the observability properties of the underlying linear Gaussian dynamical system. We evaluate KODE via two metrics: regret, the cumulative expected difference between the highest possible reward and the reward sampled by KODE, and action alignment, which measures how closely KODE's chosen action aligns with the state variable of the linear Gaussian dynamical system. To provide intuition for this performance, we prove that KODE implicitly encourages the learner to explore actions depending on the observability of the system. Finally, we compare KODE to several well-known stochastic multi-armed bandit algorithms to validate our theoretical results.
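The abstract's setting, a bandit whose reward is the scalar output of a linear Gaussian dynamical system, and its exploration-free selection rule can be illustrated with a small sketch. This is a hypothetical toy instance, not the paper's implementation: the dynamics `A`, noise covariances `Q` and `R`, the fixed action set, and the greedy rule acting directly on the Kalman filter's state prediction are all assumptions for illustration. Note that each round only measures the state along the chosen action vector, which is the mechanism behind the paper's observability-dependent behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance (all parameters assumed for illustration).
n, K, T = 2, 5, 200                        # state dim, number of actions, horizon
A = np.array([[0.9, 0.1], [0.0, 0.8]])     # state transition matrix (stable)
Q = 0.01 * np.eye(n)                       # process noise covariance
R = 0.1                                    # reward (measurement) noise variance
actions = rng.normal(size=(K, n))          # fixed action set, one row per action

x = rng.normal(size=n)                     # true latent state of the system
x_hat, P = np.zeros(n), np.eye(n)          # Kalman filter estimate and covariance

rewards = []
for t in range(T):
    # Exploration-free (greedy) selection: commit to the action whose
    # predicted reward a_k^T x_hat is highest -- no explicit exploration.
    k = int(np.argmax(actions @ x_hat))
    a = actions[k]

    # Reward is the system output measured through the chosen action: a^T x + v.
    y = a @ x + np.sqrt(R) * rng.normal()
    rewards.append(y)

    # Kalman measurement update with measurement matrix C = a^T.
    # Only the component of the state along `a` is observed this round,
    # so uncertainty orthogonal to the played actions persists.
    S = a @ P @ a + R                      # innovation variance
    Kg = P @ a / S                         # Kalman gain
    x_hat = x_hat + Kg * (y - a @ x_hat)
    P = P - np.outer(Kg, a @ P)

    # Time update: propagate the true state and the estimate through A.
    x = A @ x + rng.multivariate_normal(np.zeros(n), Q)
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
```

After the loop, `rewards` holds the sampled reward trajectory, and the eigenvalues of `P` indicate which state directions the greedy play has (and has not) made observable.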

Original language: English
Title of host publication: 2025 IEEE 64th Conference on Decision and Control, CDC 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5493-5500
Number of pages: 8
ISBN (Electronic): 9798331526276
State: Published - 2025
Event: 64th IEEE Conference on Decision and Control, CDC 2025 - Rio de Janeiro, Brazil
Duration: Dec 9, 2025 to Dec 12, 2025

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 64th IEEE Conference on Decision and Control, CDC 2025
Country/Territory: Brazil
City: Rio de Janeiro
Period: 12/9/25 to 12/12/25
