Poster
Low-Rank Adapting Models for Sparse Autoencoders
Matthew Chen · Josh Engels · Max Tegmark
East Exhibition Hall A-B #E-1003
Large language models (LLMs) have demonstrated profound capabilities. To help ensure these models are not doing something we humans would disapprove of, researchers want to understand the internal mechanisms the models use to function. One tool, the sparse autoencoder, has recently gained traction for its ability to translate a model's internal representations into human-interpretable concepts. Unfortunately, this tool falls short of fully explaining the model's internal computation: when we restrict our lens to only the interpretable concepts it finds, the model performs significantly worse. In other words, we cannot be confident that the model is faithful to our interpretation of its internal mechanism. In this work, we explore how to cheaply train the model, via low-rank adapters, to more faithfully use the interpretable concepts we do identify, without sacrificing its profound capabilities. We can therefore be more confident than before in our understanding of what this modified model is doing.
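To make the idea concrete, here is a minimal sketch (not the authors' training code) of the general recipe: splice a frozen sparse autoencoder into one transformer layer, replace that layer's activations with the autoencoder's reconstruction, and train only low-rank (LoRA) adapters so the language-modeling loss recovers. The model name, layer index, `ToySAE` class, and LoRA settings below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: replace one layer's activations with a frozen SAE's
# reconstruction, then train only LoRA adapters with the usual LM loss.
# Model name, layer index, ToySAE, and LoRA settings are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

class ToySAE(nn.Module):
    """Stand-in sparse autoencoder: ReLU encoder + linear decoder."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

model_name = "gpt2"  # illustrative; the paper targets larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
d_model = model.config.n_embd

# Frozen SAE spliced into one block's output via a forward hook.
sae = ToySAE(d_model, 8 * d_model)
for p in sae.parameters():
    p.requires_grad_(False)

layer_idx = 6  # arbitrary choice of which layer to intervene on

def splice_in_sae(module, inputs, output):
    hidden = output[0]                  # the block's hidden states
    return (sae(hidden),) + output[1:]  # swap in the reconstruction

model.transformer.h[layer_idx].register_forward_hook(splice_in_sae)

# Freeze the base weights and add small trainable LoRA adapters.
model = get_peft_model(
    model, LoraConfig(task_type="CAUSAL_LM", r=8, target_modules=["c_attn"])
)

# Standard language-modeling loss; gradients reach only the adapters,
# teaching the model to "work through" the SAE's interpretable code.
batch = tok("Interpretability asks what models compute.", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
```

Because the base weights and the sparse autoencoder stay frozen, only the small adapter matrices are updated, which is what makes this kind of adaptation cheap relative to full fine-tuning.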