

Spotlight Poster

Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration

Shiqing Gao · Jiaxin Ding · Luoyi Fu · Xinbing Wang

West Exhibition Hall B2-B3 #W-612
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 2C Reinforcement Learning
Tue 15 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Constrained Reinforcement Learning (CRL) aims to maximize cumulative rewards while satisfying constraints. However, existing CRL algorithms often incur significant constraint violations during training, limiting their applicability in safety-critical scenarios. In this paper, we identify underestimation of the cost value function as a key factor contributing to these violations. To address this issue, we propose the Memory-driven Intrinsic Cost Estimation (MICE) method, which introduces intrinsic costs to mitigate underestimation and controls bias to promote safer exploration. Inspired by flashbulb memory, where humans vividly recall dangerous experiences to avoid risks, MICE constructs a memory module that stores previously explored unsafe states to identify high-cost regions. The intrinsic cost is formulated as a pseudo-count of the current state's visits to these risk regions. Furthermore, we propose an extrinsic-intrinsic cost value function that incorporates intrinsic costs and adopts a bias correction strategy. Using this function, we formulate an optimization objective within a trust region and develop corresponding optimization methods. Theoretically, we provide convergence guarantees for the proposed cost value function and establish the worst-case constraint violation for the MICE update. Extensive experiments demonstrate that MICE significantly reduces constraint violations while preserving policy performance comparable to baselines.
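To make the memory-and-pseudo-count idea concrete, the sketch below illustrates one plausible reading of the abstract: a buffer of previously encountered unsafe states and a kernel-based pseudo-count that serves as an intrinsic cost added to the extrinsic cost. All names (UnsafeStateMemory, pseudo_count, beta, radius) and the Gaussian-kernel count estimator are illustrative assumptions, not the authors' implementation, which is not detailed on this page.

```python
import numpy as np

class UnsafeStateMemory:
    """Fixed-capacity buffer of previously visited unsafe states (illustrative)."""

    def __init__(self, capacity=1000, radius=0.5):
        self.capacity = capacity
        self.radius = radius   # neighborhood radius defining a "risk region" (assumed hyperparameter)
        self.states = []       # stored unsafe states as 1-D numpy arrays

    def add(self, state):
        """Store an unsafe state, evicting the oldest entry when the buffer is full."""
        if len(self.states) >= self.capacity:
            self.states.pop(0)
        self.states.append(np.asarray(state, dtype=np.float64))

    def pseudo_count(self, state):
        """Kernel-based pseudo-count of how often `state` falls inside the
        risk regions around stored unsafe states."""
        if not self.states:
            return 0.0
        s = np.asarray(state, dtype=np.float64)
        dists = np.linalg.norm(np.stack(self.states) - s, axis=1)
        # Gaussian kernel: nearby unsafe states contribute ~1, distant ones ~0.
        return float(np.sum(np.exp(-0.5 * (dists / self.radius) ** 2)))


def combined_cost(extrinsic_cost, state, memory, beta=0.1):
    """Extrinsic-intrinsic cost: extrinsic cost plus a pseudo-count intrinsic term,
    which could serve as the target for a cost critic (assumed weighting `beta`)."""
    return extrinsic_cost + beta * memory.pseudo_count(state)


# Toy usage: record unsafe states, then query the cost of a nearby state.
memory = UnsafeStateMemory(capacity=100, radius=0.5)
memory.add([1.0, 0.0])
memory.add([1.1, 0.1])
print(combined_cost(extrinsic_cost=0.0, state=[1.05, 0.05], memory=memory))
```

In this sketch, states near previously observed unsafe states receive a higher combined cost, which would push a constrained policy update away from those regions; the paper's actual count estimator and bias correction strategy may differ.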

Lay Summary:

Many AI systems learn by trial and error, but in safety-critical applications like robotics or autonomous driving, this can lead to costly or dangerous mistakes. We noticed that existing algorithms often underestimate risks, causing them to violate safety constraints during training. To tackle this, we developed a method inspired by how people remember and avoid dangerous experiences. Our approach, called MICE, lets AI “remember” risky situations it has seen before. By tracking and learning from these past dangers, our method helps the AI become more cautious and reduces the chance of making unsafe decisions. With this new approach, we found that AI systems could train much more safely without losing their ability to perform well. This makes our work a step forward in deploying AI in real-world scenarios where safety can’t be compromised, such as large language models, robotics, and autonomous driving.
