Poster in Workshop: Actionable Interpretability
Internal states before wait modulate reasoning patterns
Dmitrii Troitskii · Koyena Pal · Chris Wendler · Callum McDougall · Neel Nanda
Prior work has shown that a significant driver of performance in reasoning models is their ability to reason and self-correct. A distinctive marker in these reasoning traces is the token "wait", which often signals reasoning behavior such as backtracking. Despite the complexity of this behavior, little is understood about why models do or do not decide to reason in this particular manner, which limits our understanding of what makes a reasoning model effective. In this work, we address the question of whether a model's latents preceding "wait" tokens contain information relevant for modulating the subsequent reasoning process. To this end, we train crosscoders at multiple layers of DeepSeek-R1-Distill-Llama-8B and its base version, and introduce a novel latent attribution patching technique for the crosscoder setting. Using this technique, we locate a small set of features relevant for promoting or suppressing the probability of "wait" tokens. Finally, through a targeted series of experiments analyzing max-activating examples and performing causal interventions, we show that many of the identified features are indeed relevant to the reasoning process and give rise to distinct reasoning patterns such as restarting from the beginning, recalling prior knowledge, expressing uncertainty, and double-checking.
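As a rough illustration of the idea behind attribution patching applied to crosscoder latents (a minimal sketch, not the authors' method or code), the snippet below estimates each latent's effect on the "wait" logit with a first-order approximation: the change in the metric from ablating latent j is approximated by -a_j * d(metric)/d(a_j). Random toy weights stand in for a trained crosscoder, and all names here (W_enc, W_dec, wait_logit_from_latents) are hypothetical.

```python
import torch

torch.manual_seed(0)

d_model, n_latents = 16, 64

# Toy stand-ins for a trained crosscoder and the model's unembedding.
W_enc = torch.randn(d_model, n_latents)   # crosscoder encoder (hypothetical)
W_dec = torch.randn(n_latents, d_model)   # crosscoder decoder (hypothetical)
w_unembed_wait = torch.randn(d_model)     # unembedding row for the "wait" token

def wait_logit_from_latents(latents):
    # Decode latents back to the residual stream and read off the "wait" logit.
    resid_recon = latents @ W_dec
    return resid_recon @ w_unembed_wait

# Residual-stream activation at the position preceding a "wait" token (toy).
resid = torch.randn(d_model)
latents = torch.relu(resid @ W_enc).requires_grad_(True)

metric = wait_logit_from_latents(latents)
metric.backward()

# First-order (attribution patching) estimate of the metric change from
# ablating each latent: delta_metric ≈ (0 - a_j) * d(metric)/d(a_j).
attribution = -latents.detach() * latents.grad

# Latents with the largest estimated influence on the "wait" logit.
top = torch.topk(attribution.abs(), k=5)
print(top.indices, attribution[top.indices])
```

In practice the metric gradient would be backpropagated from the full model's "wait" logit through the crosscoder's reconstruction, so a single backward pass scores every latent at once instead of requiring one ablation run per latent.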