Skip to yearly menu bar Skip to main content


Poster
in
Workshop: Actionable Interpretability

Probing and Steering Evaluation Awareness of Language Models

Jord Nguyen · Hoang Khiem · Carlo Attubato · Felix Hofstätter

[ ] [ Project Page ]
Sat 19 Jul 1 p.m. PDT — 2 p.m. PDT

Abstract:

Language models can distinguish between testing and deployment phases — a capability known as evaluation awareness. This capability has significant safety implications, potentially undermining the reliability of evaluations and enabling deceptive behaviours. In this paper, we study evaluation awareness in Llama-3.3-70B-Instruct. We show that linear probes can separate real-world evaluation and deployment prompts, suggesting that current models internally represent this distinction. We also find that current safety evaluations are correctly classified by the probes, suggesting that they already appear artificial or inauthentic to models. Finally, we show that using evaluations-relevant SAE features to steer models can partially uncover sandbagging. Our findings underscore the importance of ensuring trustworthy evaluations and understanding deceptive capabilities. More broadly, our work showcases how model internals may be leveraged to support blackbox methods in safety evaluations, especially for future models more competent at evaluation awareness and deception.

Chat is not available.