

Poster in Workshop: Actionable Interpretability

Insights into a radiology-specialised multimodal large language model with sparse autoencoders

Kenza Bouzid · Shruthi Bannur · Felix Meissen · Daniel Coelho de Castro · Anton Schwaighofer · Javier Alvarez-Valle · Stephanie L Hyland

Sat 19 Jul 10:40 a.m. PDT — 11:40 a.m. PDT

Abstract:

Interpretability can improve the safety, transparency, and trustworthiness of artificial intelligence (AI) models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply a Matryoshka-SAE to the radiology-specialised multimodal large language model MAIRA-2 to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts, including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes, and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2, marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model and paving the way for improved model transparency.
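To make the pipeline described in the abstract concrete, the sketch below shows, in generic PyTorch, how a sparse autoencoder can be fit over a transformer's hidden activations and how a single learned feature direction can be added back to those activations for steering. This is a minimal illustration only: the class names, dimensions, and the `steer` helper are hypothetical, and the sketch omits the Matryoshka nested-dictionary training objective and any MAIRA-2-specific details, which are not given in this listing.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Minimal SAE: encode hidden activations into a sparse, overcomplete
    feature basis, then reconstruct the original activations."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))  # sparse, non-negative feature activations
        recon = self.decoder(feats)             # reconstruction of the input activations
        return feats, recon


def steer(acts: torch.Tensor, sae: SparseAutoencoder,
          feature_idx: int, alpha: float) -> torch.Tensor:
    """Add alpha times one SAE decoder direction to the activations,
    nudging generations toward (alpha > 0) or away from (alpha < 0) the concept."""
    direction = sae.decoder.weight[:, feature_idx]  # (d_model,) decoder column for this feature
    return acts + alpha * direction


if __name__ == "__main__":
    d_model, n_features = 4096, 65536          # hypothetical sizes, not MAIRA-2's
    sae = SparseAutoencoder(d_model, n_features)
    acts = torch.randn(8, d_model)             # stand-in for captured hidden activations
    feats, recon = sae(acts)
    steered = steer(acts, sae, feature_idx=123, alpha=4.0)
    print(feats.shape, recon.shape, steered.shape)
```

In practice, steering of this kind is applied by patching the modified activations back into the model's forward pass before generation, which is how directional control over the output text would be evaluated.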
