Poster in Workshop: Programmatic Representations for Agent Learning

Interpretable Reward Modeling with Active Concept Bottlenecks

Sonia Laguna · Katarzyna Kobalczyk · Julia Vogt · Mihaela van der Schaar


Abstract:

We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning through selective concept annotation. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. To make this framework efficient in low-supervision settings, we formalize an active learning strategy that dynamically acquires the most informative concept labels. We propose an acquisition function based on Expected Information Gain and show that it significantly accelerates concept learning without compromising preference accuracy. Evaluated on UltraFeedback, our method outperforms baselines in interpretability and sample efficiency, marking a step toward more transparent, auditable, and human-aligned reward models.
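The abstract describes an acquisition function based on Expected Information Gain (EIG) for choosing which concept label to annotate next. The paper's exact model is not given here, so the following is only a minimal sketch under simplifying assumptions: binary concepts, a linear concept-to-reward head, and a Bernoulli preference output. For each candidate concept, the sketch computes the mutual information between the unknown concept label and the preference prediction (entropy of the marginal preference probability minus the expected entropy after observing the concept); the concept with the largest gain would be queried first. All names (`eig_scores`, `concept_probs`, `weights`) are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entropy(p):
    """Binary entropy in nats, numerically clipped."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def eig_scores(concept_probs, weights):
    """Expected information gain (about the preference label) from
    annotating each binary concept.

    concept_probs: (K,) predicted probability that each concept holds.
    weights:       (K,) concept-to-reward weights of a linear bottleneck head.
    Returns:       (K,) EIG per concept; already-certain concepts score ~0.
    """
    gains = np.empty_like(concept_probs)
    for k, pk in enumerate(concept_probs):
        # Preference probability if concept k were observed as 1 or as 0.
        c1, c0 = concept_probs.copy(), concept_probs.copy()
        c1[k], c0[k] = 1.0, 0.0
        p1 = sigmoid(weights @ c1)
        p0 = sigmoid(weights @ c0)
        # Marginal preference probability, mixing over the unknown concept.
        p_marg = pk * p1 + (1.0 - pk) * p0
        # Mutual information: marginal entropy minus expected conditional entropy.
        gains[k] = entropy(p_marg) - (pk * entropy(p1) + (1.0 - pk) * entropy(p0))
    return gains

# Hypothetical usage: query the most informative concept label.
scores = eig_scores(np.array([0.5, 1.0]), np.array([2.0, 2.0]))
best = int(np.argmax(scores))
```

Because binary entropy is concave, each score is a Jensen gap and therefore non-negative, and a concept whose probability is already 0 or 1 yields zero gain, matching the intuition that annotation budget should go to uncertain, reward-relevant concepts.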
