Poster in Affinity Workshop: New In ML

Weight-Based Interpretability with a Signed and Shrunk Quadratic Activation Function


Abstract:

Understanding the inner workings of machine learning models is critical for ensuring their reliability and robustness. Whilst many techniques in mechanistic interpretability focus on activation-driven analyses, being able to derive meaningful features directly from the weights of a neural network would provide stronger guarantees and greater computational efficiency. Existing techniques for analyzing model features through weights suffer from drawbacks such as reduced performance and data inefficiency. In this paper, we introduce the Signed and Shrunk Quadratic (SSQ), an activation function designed to allow Gated Linear Units (GLUs) to learn interpretable features without these drawbacks. Our experimental results show that SSQ achieves performance competitive with state-of-the-art activation functions whilst enabling weight-based interpretability.
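The abstract names SSQ but does not give its functional form. As a purely illustrative sketch, one plausible reading of "signed and shrunk quadratic" is a signed quadratic, sign(x)·x², followed by soft-shrinkage toward zero; the class names, the GLU wiring, and the `shrink` threshold below are all assumptions for illustration, not the authors' definitions:

```python
import torch
import torch.nn as nn


class SSQ(nn.Module):
    """Hypothetical Signed and Shrunk Quadratic activation.

    The abstract does not define SSQ; this sketch assumes a signed
    quadratic, sign(x) * x**2, followed by soft-shrinkage that pulls
    small magnitudes to zero. The `shrink` threshold is an assumed,
    illustrative parameter, not the authors' specification.
    """

    def __init__(self, shrink: float = 0.1):
        super().__init__()
        self.shrink = nn.Parameter(torch.tensor(shrink))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Signed quadratic: square the magnitude but keep the sign of x.
        sq = torch.sign(x) * x.pow(2)
        # Soft-shrinkage: subtract the threshold from the magnitude and
        # clip at zero, so small pre-activations are silenced entirely.
        return torch.sign(sq) * torch.clamp(sq.abs() - self.shrink, min=0.0)


class SSQGLU(nn.Module):
    """GLU block whose gate uses the (assumed) SSQ activation."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden)
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.act = SSQ()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard GLU structure: elementwise product of an activated
        # gate branch with a linear "up" branch, then a projection back.
        return self.down(self.act(self.gate(x)) * self.up(x))
```

If SSQ behaves anything like this sketch, a sign-preserving gate with a hard zero region is one way an activation could make GLU weights directly inspectable: inputs that never clear the threshold contribute nothing, so the surviving weight directions correspond to features the gate actually uses.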
