Poster
in
Workshop: Methods and Opportunities at Small Scale (MOSS)

What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers

Pulkit Gopalani · Wei Hu

Keywords: [ interpretability ] [ transformer training dynamics ] [ attention map ] [ science of language models ] [ abrupt learning ]


Abstract:

Training Transformers on algorithmic tasks frequently exhibits an intriguing abrupt learning phenomenon: an extended performance plateau followed by a sudden, sharp improvement. This work investigates the mechanisms underlying such dynamics, primarily in shallow Transformers. We reveal that during the plateau, the model often develops an interpretable partial solution while simultaneously exhibiting a strong repetition bias in its outputs. This output degeneracy is accompanied by internal representation collapse, where hidden states across different tokens become nearly parallel. We further identify the slow learning of optimal attention maps as a key bottleneck. Hidden progress in attention configuration during the plateau precedes the eventual rapid convergence, and directly intervening on attention significantly alters plateau duration and the severity of repetition bias and representational collapse. We validate that these phenomena—repetition bias and representation collapse—are not artifacts of toy setups but also manifest in the early pre-training stage of LLMs like Pythia and OLMo.
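The representation collapse described above can be quantified in a simple way: if hidden states across tokens become nearly parallel, their average pairwise cosine similarity approaches 1. The sketch below is a hypothetical diagnostic illustrating this idea (the paper's exact metric may differ); `representation_collapse` and its interface are assumptions for illustration.

```python
import numpy as np

def representation_collapse(hidden_states: np.ndarray) -> float:
    """Mean pairwise cosine similarity across token representations.

    hidden_states: array of shape (num_tokens, hidden_dim).
    Values near 1.0 indicate collapse (nearly parallel hidden states).
    This is an illustrative diagnostic, not the authors' exact metric.
    """
    # Normalize each token's hidden state to unit length.
    normed = hidden_states / np.linalg.norm(hidden_states, axis=1, keepdims=True)
    sims = normed @ normed.T
    # Average the off-diagonal entries (exclude each token's self-similarity).
    n = sims.shape[0]
    return (sims.sum() - n) / (n * (n - 1))

# Collapsed case: every row is a scalar multiple of the same vector.
collapsed = np.outer(np.arange(1, 5, dtype=float), np.ones(8))
print(round(representation_collapse(collapsed), 3))  # → 1.0

# Spread case: random high-dimensional vectors are nearly orthogonal.
rng = np.random.default_rng(0)
spread = rng.standard_normal((4, 512))
print(representation_collapse(spread) < 0.5)  # → True
```

Tracking such a statistic over training steps would show the metric rising during the plateau and dropping once the model escapes it.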
