

Poster

Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders

Charles O'Neill · Alim Gumran · David Klindt

East Exhibition Hall A-B #E-3212
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

A recent line of work has shown promise in using sparse autoencoders (SAEs) to uncover interpretable features in neural network representations. However, the simple linear-nonlinear encoding mechanism in SAEs limits their ability to perform accurate sparse inference. Using compressed sensing theory, we prove that an SAE encoder is inherently insufficient for accurate sparse inference, even in solvable cases. We then decouple the encoding and decoding processes to empirically explore the conditions under which more sophisticated sparse inference methods outperform traditional SAE encoders. Our results show that substantially more accurate inference of sparse codes can be achieved at only a minimal increase in compute. We demonstrate that this generalises to SAEs applied to large language models, where more expressive encoders achieve greater interpretability. This work opens new avenues for understanding neural network representations and analysing large language model activations.
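
As a rough illustration of the gap the abstract describes, the sketch below compares a one-step linear-nonlinear (SAE-style) encoder against iterative sparse inference with the same fixed dictionary. The dictionary D, the ISTA routine, and all sizes and hyperparameters here are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: signals that are k-sparse in an overcomplete
# dictionary D, which plays the role of the (fixed) SAE decoder.
n, m, k = 64, 256, 4                      # signal dim, dictionary size, sparsity
D = rng.normal(size=(n, m)) / np.sqrt(n)  # random dictionary (illustrative)
z_true = np.zeros(m)
z_true[rng.choice(m, k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
x = D @ z_true                            # observed activation

# (1) Amortised, SAE-style encoder: a single linear map followed by ReLU.
#     D^T is used as a stand-in encoder weight; a trained SAE would learn it.
z_sae = np.maximum(D.T @ x, 0.0)

# (2) Iterative sparse inference (ISTA) against the same fixed decoder D.
def ista(x, D, lam=0.1, n_steps=200):
    L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_steps):
        z = z - (D.T @ (D @ z - x)) / L   # gradient step on reconstruction loss
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

z_ista = ista(x, D)

for name, z in [("SAE encoder", z_sae), ("ISTA", z_ista)]:
    err = np.linalg.norm(D @ z - x) / np.linalg.norm(x)
    print(f"{name}: relative reconstruction error {err:.3f}, "
          f"active units {int(np.sum(np.abs(z) > 1e-3))}")
```

In this kind of toy comparison the iterative method typically recovers a sparser, more faithful code than the single-pass encoder, which is the sense in which encoding and decoding are decoupled above.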

Lay Summary:

Understanding how neural networks work internally is crucial as they're increasingly used in important decisions. Scientists use tools called sparse autoencoders (SAEs) to extract interpretable features from these complex models, but there's a fundamental problem: SAEs use overly simple methods that can't recover the best possible sparse representations.

This paper proves mathematically that SAEs have an inherent limitation: they cannot achieve optimal sparse inference even when it's theoretically possible. The authors show that more sophisticated encoding methods, like multilayer perceptrons, significantly outperform traditional SAEs while using only slightly more computation.

When tested on large language models like GPT-2, these better encoding methods actually produced more interpretable features than simpler approaches, contradicting the common belief that simpler methods are necessary for interpretability. This work opens new possibilities for better understanding how neural networks represent information internally.
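
Since the lay summary mentions multilayer perceptrons as a more expressive alternative, here is a minimal, hypothetical sketch of the two encoder families sharing a single linear decoder. The layer sizes and hidden width are assumptions chosen for illustration and do not reproduce the models trained in the paper.

```python
import torch
import torch.nn as nn

d_model, d_sae = 768, 24576  # illustrative sizes for a GPT-2-scale residual stream

# Standard SAE encoder: one affine map followed by ReLU (linear-nonlinear).
sae_encoder = nn.Sequential(
    nn.Linear(d_model, d_sae),
    nn.ReLU(),
)

# A more expressive (hypothetical) MLP encoder with one hidden layer; the
# decoder remains a single linear map, so the features are still read out
# as a sparse linear code and can be interpreted in the usual way.
mlp_encoder = nn.Sequential(
    nn.Linear(d_model, 4 * d_model),
    nn.ReLU(),
    nn.Linear(4 * d_model, d_sae),
    nn.ReLU(),
)
decoder = nn.Linear(d_sae, d_model)

x = torch.randn(8, d_model)               # a batch of stand-in activations
for name, enc in [("linear-nonlinear", sae_encoder), ("MLP", mlp_encoder)]:
    z = enc(x)                            # sparse code
    x_hat = decoder(z)                    # reconstruction from the linear decoder
    print(name, "code shape:", tuple(z.shape), "recon shape:", tuple(x_hat.shape))
```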
