Poster
in
Workshop: Methods and Opportunities at Small Scale (MOSS)

Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit

Valérie Costa · Thomas Fel · Ekdeep Singh Lubana · Bahareh Tolooshams · Demba Ba

Keywords: [ Representation Learning ] [ Sparse Autoencoders ] [ Interpretability ] [ Dictionary Learning ]


Abstract:

Sparse autoencoders (SAEs) have recently become central tools for interpretability, leveraging dictionary learning principles to extract sparse, interpretable features from neural representations whose underlying structure is typically unknown. This paper evaluates SAEs in a controlled setting on MNIST, revealing that current shallow architectures implicitly rely on a quasi-orthogonality assumption that limits their ability to extract correlated features. To move beyond this limitation, we compare them with an iterative SAE that unrolls Matching Pursuit (MP-SAE). MP-SAE enables the residual-guided extraction of correlated features that arise in hierarchical settings such as handwritten digit generation, while guaranteeing monotonic improvement of the reconstruction as more atoms are selected.
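The residual-guided selection the abstract describes can be illustrated with plain Matching Pursuit, the classical greedy algorithm that MP-SAE unrolls. The sketch below is a minimal NumPy implementation of standard Matching Pursuit, not the authors' MP-SAE architecture; the function name and dictionary setup are illustrative. Because each step subtracts the selected atom's projection from the residual, the residual norm never increases as more atoms are selected — the monotonicity property the abstract refers to.

```python
import numpy as np

def matching_pursuit(x, D, k):
    """Greedy sparse coding of x over dictionary D (unit-norm columns).

    At each of k steps: pick the atom most correlated with the current
    residual, accumulate its coefficient, and subtract its contribution.
    Returns the sparse code, final residual, and per-step residual norms.
    """
    r = x.astype(float).copy()          # residual starts as the signal
    z = np.zeros(D.shape[1])            # sparse code (one entry per atom)
    errs = []
    for _ in range(k):
        corr = D.T @ r                  # correlation of atoms with residual
        j = int(np.argmax(np.abs(corr)))  # best-matching atom
        z[j] += corr[j]                 # accumulate coefficient for atom j
        r = r - corr[j] * D[:, j]       # residual update (orthogonal step)
        errs.append(np.linalg.norm(r))  # monotonically non-increasing
    return z, r, errs

# Toy example: random dictionary with unit-norm columns.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # normalize atoms
x = rng.standard_normal(64)
z, r, errs = matching_pursuit(x, D, k=10)
```

Since the atoms are unit-norm, each step removes exactly `corr[j]**2` from the squared residual norm, which is why reconstruction quality improves monotonically — unlike a shallow SAE, whose single joint encoding step implicitly assumes near-orthogonal atoms.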
