

Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models

Model Organisms for Emergent Misalignment

Edward Turner · Anna Soligo · Mia Taylor · Senthooran Rajamanoharan · Neel Nanda

Keywords: [ Misalignment ] [ Fine-tuning ] [ Interpretability ] [ LLMs ]


Abstract:

Recent work discovered Emergent Misalignment (EM): fine-tuning large language models on narrowly harmful datasets can lead them to become broadly misaligned. A survey of experts prior to publication revealed that this result was highly unexpected, demonstrating critical gaps in our understanding of model alignment. In this work, we advance understanding of this phenomenon and provide tools for future research. Using new narrowly misaligned datasets, we create improved model organisms that achieve 99% coherence (vs. 67% prior), work with smaller 0.5B parameter models (vs. 32B), and can induce misalignment using just a single rank-1 LoRA adapter. We demonstrate that EM occurs robustly across diverse model sizes, three model families, and numerous training protocols, including full supervised fine-tuning. Leveraging these cleaner model organisms, we isolate a phase change that corresponds to learning the directions necessary to induce misalignment. Aligning large language models is critical for frontier AI safety, yet EM exposes how far we are from achieving this robustly. By distilling clean model organisms that isolate a minimal alignment-compromising change, and identifying where this change is learnt, we establish a foundation for future research into understanding and mitigating alignment risks in LLMs.
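The rank-1 LoRA result implies that a single learned direction per adapted weight matrix can suffice to induce broad misalignment. The sketch below shows how such a rank-1 adapter might be configured with Hugging Face PEFT; the model choice, target module, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the authors' exact configuration): fine-tuning with a
# single rank-1 LoRA adapter using Hugging Face PEFT. The base model, target
# module, and scaling factor below are assumptions for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# A small instruction-tuned model in the ~0.5B parameter range (assumed choice).
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Rank-1 adapter: each adapted weight matrix receives a single outer-product
# update B @ A, i.e. one learned direction.
lora_config = LoraConfig(
    r=1,                          # rank-1, as highlighted in the abstract
    lora_alpha=16,                # illustrative scaling factor
    target_modules=["down_proj"],  # assumed target module; the paper may differ
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the rank-1 A/B matrices are trainable
```

The adapted model can then be passed to a standard supervised fine-tuning loop (e.g. a Hugging Face `Trainer`) on a narrow dataset; the point of the sketch is only how small the trainable change can be.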
