

Poster

Neutral residues: revisiting adapters for model extension

Franck TALLA · Edouard Grave · Herve Jegou

East Exhibition Hall A-B #E-2805
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

We address the problem of extending a pre-trained large language model to a new domain that was not seen during training. Standard techniques, such as fine-tuning or low-rank adaptation (LoRA), are successful at domain adaptation, but do not formally add capacity to the model. This often leads to a trade-off between performing well on the new domain and degrading performance on the original domain. Here, we propose to revisit and improve adapters to extend LLMs. Our paper analyzes this extension problem from three angles: data, architecture and training procedure, which are advantageously considered jointly. The resulting method, called neutral residues, modifies adapters so that each new residual block outputs near-zero values on the original domain. This solution leads to strong results when adapting a state-of-the-art model originally trained on English to a new language. Neutral residues significantly outperform competing approaches such as fine-tuning, LoRA or vanilla adapters in terms of the trade-off between learning the new language and not forgetting English.
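The core idea — a residual adapter block whose output is driven toward zero on original-domain inputs — can be illustrated with a toy sketch. This is not the paper's exact architecture; the gating function, zero-initialization, and all names (`adapter_block`, `W_down`, `W_up`, `gate_w`, `gate_b`) are illustrative assumptions, shown only to make the "neutral on the old domain, active on the new one" behavior concrete.

```python
import numpy as np

def adapter_block(x, W_down, W_up, gate_w, gate_b):
    """Toy gated bottleneck adapter producing a residual correction.

    If the scalar gate saturates near 0 for original-domain inputs,
    the block contributes (near) nothing, so the base model's
    behavior on that domain is preserved.
    """
    h = np.maximum(W_down @ x, 0.0)                       # down-projection + ReLU
    gate = 1.0 / (1.0 + np.exp(-(gate_w @ x + gate_b)))   # scalar gate in (0, 1)
    return gate * (W_up @ h)                              # near-zero residual when gate ~ 0

rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden width and bottleneck rank (toy sizes)
W_down = rng.normal(size=(r, d))
W_up = np.zeros((d, r))           # zero-init: the block starts exactly neutral
gate_w = rng.normal(size=d)
x = rng.normal(size=d)            # stand-in for an original-domain hidden state

residual = adapter_block(x, W_down, W_up, gate_w, gate_b=-4.0)
```

With the up-projection initialized to zero, the residual is exactly zero before training; during training, the gate would have to learn to stay closed on original-domain inputs while opening on the new domain.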

Lay Summary:

Large language models (LLMs) are powerful tools trained on enormous amounts of text. But when we want them to take on something new—like a different language or a specialized area they weren’t originally trained on—it’s not practical to retrain them from scratch. That process is expensive, slow, and uses a lot of energy. A common shortcut is to “finetune” the model on new data, but this often makes the model forget what it already knew—like losing its English skills when learning a new language.

Our research introduces a method called neutral residues, which helps models learn new things without interfering with their original abilities. Instead of changing the whole system, we add small, specialized parts that only activate for the new domain and stay “silent” otherwise. Because we only train these small additions, the process is much cheaper than retraining the full model.

This makes it easier, more affordable, and more sustainable to adapt powerful models. It opens the door for smaller organizations to personalize cutting-edge AI—and helps reduce the environmental impact of training entirely new systems.
