

Poster in Workshop: Actionable Interpretability

Probing for Arithmetic Errors in Language Models

Yucheng Sun · Alessandro Stolfo · Mrinmaya Sachan

[ Project Page ]
Sat 19 Jul 10:40 a.m. PDT — 11:40 a.m. PDT

Abstract:

We investigate whether internal activations in language models can be used to detect arithmetic errors. Starting with a controlled setting of 3-digit addition, we show that simple probes can accurately decode both the model’s predicted output and the correct answer from hidden states, regardless of whether the model’s output is correct. Building on this, we train lightweight error detectors that predict model correctness with over 90% accuracy. We then extend our analysis to multi-step arithmetic reasoning in the GSM8K dataset and find that probes trained on simple arithmetic generalize well to this more complex setting, maintaining high accuracy and revealing consistent internal representations. Finally, we demonstrate that these probes can guide selective re-prompting of erroneous reasoning steps, improving task accuracy with minimal disruption to correct outputs. Our findings suggest that arithmetic errors can be anticipated from internal activations alone, and that simple probes offer a viable path toward lightweight model self-correction.
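The probing setup described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the hidden states below are synthetic stand-ins (real activations would come from a specific layer of the language model, which the abstract does not name), and the probe is a simple least-squares linear classifier trained to predict whether the model's answer is correct.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 2000  # hidden-state dimensionality and sample count (assumed)

# Synthetic stand-in for hidden states at the answer token: correctness
# shifts the activation along a fixed "error direction" plus noise.
axis = rng.normal(size=d)
axis /= np.linalg.norm(axis)
y = rng.integers(0, 2, size=n)                      # 1 = model answered correctly
H = rng.normal(size=(n, d)) + np.outer(3.0 * y - 1.5, axis)

# Lightweight linear probe: fit weights by least squares on a train split,
# then classify held-out activations by the sign of the projection.
w, *_ = np.linalg.lstsq(H[:1500], 2.0 * y[:1500] - 1.0, rcond=None)
pred = (H[1500:] @ w > 0).astype(int)
acc = (pred == y[1500:]).mean()
print(f"held-out probe accuracy: {acc:.2f}")
```

With real activations, such a probe's correctness prediction could then gate selective re-prompting of the steps it flags as erroneous, as the abstract describes.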
