

Poster in Workshop: Exploration in AI Today (EXAIT)

Scalable and Efficient Exploration via Intrinsic Rewards in Continuous-time Dynamical Systems

Klemens Iten · Andreas Krause

Keywords: [ epistemic uncertainty ] [ continuous-time reinforcement learning ] [ model-based RL ] [ exploration-exploitation trade-off ] [ intrinsic rewards ]


Abstract:

Reinforcement learning algorithms are typically designed for discrete-time dynamics, even though the underlying real-world control systems are often continuous in time. In this paper, we study the problem of continuous-time reinforcement learning, where the unknown system dynamics are represented by nonlinear ordinary differential equations (ODEs). We leverage probabilistic models, such as Gaussian processes and Bayesian neural networks, to learn an uncertainty-aware model of the underlying ODE. Our algorithm, COMBRL, greedily maximizes a weighted sum of the extrinsic reward and the model's epistemic uncertainty. We show that this approach achieves sublinear regret in the continuous-time setting. Furthermore, in the unsupervised RL setting (i.e., without extrinsic rewards), we provide a sample complexity bound. In our experiments, we evaluate COMBRL in the standard and unsupervised RL settings and show that it outperforms the baselines across several deep RL tasks.
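
To make the stated objective concrete, below is a minimal, hypothetical Python sketch of the greedy criterion described in the abstract: a weighted sum of the extrinsic reward and an epistemic-uncertainty bonus, here estimated as the disagreement of an ensemble of learned dynamics models. The names `ensemble_models`, `extrinsic_reward`, and the weight `beta` are illustrative assumptions, not taken from the paper or its implementation.

```python
import numpy as np

def epistemic_uncertainty(ensemble_models, state, action):
    """Proxy for epistemic uncertainty: disagreement (std. dev.) of an
    ensemble of learned dynamics models at (state, action).
    NOTE: illustrative sketch, not the authors' estimator."""
    preds = np.stack([f(state, action) for f in ensemble_models])
    return preds.std(axis=0).sum()

def augmented_reward(extrinsic_reward, ensemble_models, state, action, beta=1.0):
    """Weighted sum of extrinsic reward and intrinsic (uncertainty) reward,
    as in the greedy objective described in the abstract."""
    r_ext = extrinsic_reward(state, action)
    r_int = epistemic_uncertainty(ensemble_models, state, action)
    # beta = 0 recovers pure exploitation; setting extrinsic_reward to zero
    # corresponds to the unsupervised (reward-free) exploration setting.
    return r_ext + beta * r_int
```

In this reading, the intrinsic-reward weight trades off exploration against exploitation: a larger `beta` drives the policy toward regions where the learned ODE model is least certain, while `beta = 0` reduces to standard reward maximization.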
