

Poster in Workshop: Methods and Opportunities at Small Scale (MOSS)

Transformers Pretrained on Procedural Data Contain Modular Structures for Algorithmic Reasoning

Zachary Shinnick · Liangze Jiang · Hemanth Saratchandran · Anton Hengel · Damien Teney

Keywords: [ Inductive Biases ] [ Transformers ] [ Pre-training ] [ Procedural Data ] [ Algorithmic Reasoning ]


Abstract:

$\textbf{Context.}$ Pretraining on large, semantically rich datasets is key for developing language models. Surprisingly, recent studies have shown that even synthetic data, generated procedurally through simple semantic-free algorithms, can yield some of the same benefits as natural language pretraining. It is unclear $\textit{what}$ specific capabilities such simple synthetic data instils in a model, $\textit{where}$ these capabilities reside in the architecture, and $\textit{how}$ they manifest within its weights.

$\textbf{Findings.}$ In this short paper, we identify several beneficial forms of procedural data, together with specific algorithmic reasoning skills that improve in small transformers. Our core finding is that different procedural rules instil $\textit{distinct but complementary inductive structures}$ in the model. With extensive ablations and partial-transfer experiments, we discover that these structures reside in different parts of the model. Attention layers often carry the most transferable information, but some pretraining rules impart useful structure to MLP blocks instead. Most interestingly, the structures induced by multiple rules can be composed to jointly reinforce multiple capabilities.

$\textbf{Implications.}$ These results suggest an exciting possibility of disentangling the acquisition of knowledge from reasoning in language models, with the goal of improving their robustness and data efficiency.
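To make the idea of a partial-transfer experiment concrete, the sketch below copies only the attention parameters from a procedurally pretrained transformer into a freshly initialised one, leaving the MLP blocks at random initialisation. This is a minimal illustration under assumed settings (PyTorch `nn.TransformerEncoder` layout, illustrative model sizes), not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a small encoder-only transformer. Sizes are
# illustrative, not taken from the paper.
def make_model(d_model=128, nhead=4, num_layers=4):
    layer = nn.TransformerEncoderLayer(
        d_model=d_model, nhead=nhead,
        dim_feedforward=4 * d_model, batch_first=True,
    )
    return nn.TransformerEncoder(layer, num_layers=num_layers)

def transfer_attention_only(pretrained, fresh):
    """Copy self-attention weights layer by layer; keep the fresh MLPs.

    Swapping the "self_attn" filter for MLP parameter names would give the
    complementary experiment (transfer MLP blocks, keep random attention).
    """
    pre_state = pretrained.state_dict()
    new_state = fresh.state_dict()
    for name, tensor in pre_state.items():
        if "self_attn" in name:           # attention projections only
            new_state[name] = tensor.clone()
    fresh.load_state_dict(new_state)
    return fresh

pretrained = make_model()   # assume: already pretrained on procedural data
fresh = make_model()        # randomly initialised target model
fresh = transfer_attention_only(pretrained, fresh)
```

Comparing downstream performance of the partially transferred model against a fully random and a fully transferred baseline is one way to localise where a given procedural rule's useful structure resides.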
