

Poster

Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing

Kento Nishi · Rahul Ramesh · Maya Okawa · Mikail Khona · Hidenori Tanaka · Ekdeep Singh Lubana

East Exhibition Hall A-B #E-1107
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Knowledge Editing (KE) algorithms alter models' weights to perform targeted updates to incorrect, outdated, or otherwise unwanted factual associations. However, recent work has shown that applying KE can adversely affect models' broader factual recall accuracy and diminish their reasoning abilities. Although these studies give insights into the potential harms of KE algorithms, e.g., via performance evaluations on benchmarks, little is understood about why such destructive failures occur. Motivated by this, we define a novel synthetic task in which a Transformer is trained from scratch to internalize a "structured" knowledge graph. The structure enforces relationships between entities of the graph, such that editing a factual association has "trickling effects" on other entities (e.g., altering X's parent is Y to Z affects who X's siblings' parent is). Through evaluations of edited models on this task, we show that KE inadvertently affects representations of entities beyond the targeted one, distorting relevant structures that allow a model to infer unseen knowledge about an entity. We call this phenomenon representation shattering and demonstrate that it degrades models' factual recall and reasoning performance. We further corroborate our findings in naturalistic settings with pre-trained Llama and Mamba models. Overall, our work yields a precise mechanistic hypothesis to explain why KE has adverse effects on model abilities.
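
For intuition, the short Python sketch below illustrates the kind of "structured" knowledge graph the abstract describes. It is not the paper's code: the entities (alice, bob, carol, dave), the relations (parent, sibling), and the single closure rule are hypothetical stand-ins. The point it demonstrates is the "trickling effect": because siblings share a parent, editing one parent fact logically implies changes to other entities' facts, which an edited model must remain consistent with.

    # Illustrative sketch only (not the authors' dataset construction).
    # Facts are (subject, relation, object) triples; a structural rule
    # relates them, so a targeted edit has implied downstream effects.

    facts = {
        ("alice", "parent", "carol"),
        ("bob",   "parent", "carol"),
        ("alice", "sibling", "bob"),
        ("bob",   "sibling", "alice"),
    }

    def implied_facts(facts):
        """Close the graph under one rule: siblings share the same parent.
        A single pass suffices for this tiny example."""
        closed = set(facts)
        parents  = {(s, o) for s, r, o in closed if r == "parent"}
        siblings = {(s, o) for s, r, o in closed if r == "sibling"}
        for child, parent in parents:
            for a, b in siblings:
                if a == child:
                    closed.add((b, "parent", parent))
        return closed

    # Targeted edit: alice's parent changes from "carol" to "dave".
    edited = {f for f in facts if f != ("alice", "parent", "carol")}
    edited.add(("alice", "parent", "dave"))

    # The structural rule now implies a change for bob as well.
    print(sorted(implied_facts(edited) - implied_facts(facts)))
    # -> [('alice', 'parent', 'dave'), ('bob', 'parent', 'dave')]

In the paper's synthetic setting, a Transformer trained on such a graph encodes this relational structure in its representations; the abstract's claim is that KE methods, by rewriting one association in weight space, distort that shared structure for untouched entities as well.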

Lay Summary:

When a large language model is patched after training to alter some memorized factual knowledge, the change often degrades its broader knowledge and reasoning capabilities. Until now, the cause of this phenomenon has been unclear. We study it by building a small synthetic world of linked facts, training a Transformer, and then editing facts while tracking how the network's internal representations change.

We show that knowledge editing systematically fractures the neat geometric structure that stores information, a phenomenon we call "representation shattering." The degree of shattering predicts how much the model's overall accuracy drops, and we verify the same effect in real models such as Llama-3 and Mamba.

By revealing this hidden failure mode, our work offers a practical warning signal for risky edits and a direction for gentler, more reliable knowledge-updating techniques. Understanding and mitigating representation shattering will help future language models stay accurate, consistent, and trustworthy as they are regularly updated.
