Poster
in
Workshop: 2nd Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT)

Mitigating Forgetting in Low Rank Adaptation

Joanna Sliwa · Frank Schneider · Philipp Hennig · Jose Miguel Hernandez-Lobato

Fri 18 Jul 2:30 p.m. PDT — 3:15 p.m. PDT

Abstract:

Parameter-efficient finetuning (PEFT) makes it possible to quickly adapt large pre-trained language models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's original domain knowledge. We address this issue with LALoRA, a weight-space regularization method that applies a Laplace approximation to Low-Rank Adaptation (LoRA). We estimate how confident the model is in each parameter and constrain updates in high-confidence directions. This preserves original knowledge while still allowing efficient learning of the target domain. We demonstrate an improved learning-forgetting trade-off compared to existing baselines and discuss different approximations of the loss-landscape curvature, through which we estimate the parameters' uncertainty.
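The abstract does not give the exact formulation, but the general idea of a Laplace-style, weight-space regularizer on LoRA parameters can be sketched as follows. The sketch below is illustrative only: the class and function names (LoRALinear, estimate_diagonal_precision, laplace_penalty) and the diagonal squared-gradient curvature proxy are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: weight-space Laplace-style regularization on LoRA factors.
# All names and design choices here are illustrative, not the LALoRA code.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update A @ B."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(0.01 * torch.randn(base.out_features, rank))
        self.B = nn.Parameter(0.01 * torch.randn(rank, base.in_features))

    def forward(self, x):
        return self.base(x) + x @ (self.A @ self.B).T


def estimate_diagonal_precision(model, data_loader, loss_fn):
    """Diagonal curvature proxy (mean squared gradients, an empirical-Fisher-style
    estimate) on source-domain data; large values mark high-confidence directions."""
    precision = {n: torch.zeros_like(p) for n, p in model.named_parameters()
                 if p.requires_grad}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.requires_grad and p.grad is not None:
                precision[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: v / max(n_batches, 1) for n, v in precision.items()}


def laplace_penalty(model, anchor, precision, strength=1.0):
    """Quadratic penalty 0.5 * strength * sum_i F_i * (theta_i - theta_anchor_i)^2,
    discouraging movement along high-precision (high-confidence) directions."""
    reg = 0.0
    for n, p in model.named_parameters():
        if n in precision:
            reg = reg + (precision[n] * (p - anchor[n]) ** 2).sum()
    return 0.5 * strength * reg
```

During target-domain finetuning, the total loss would then be the task loss plus `laplace_penalty(model, anchor, precision)`, where `anchor` stores the LoRA parameters at which the curvature was estimated; this is one common way to turn a Laplace approximation into a forgetting-mitigating regularizer.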
