

Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models

LoRA Merging with SVD: Understanding Interference and Preserving Performance

Dennis Tang · Prateek Yadav · Yi-Lin Sung · Jaehong Yoon · Mohit Bansal

Keywords: [ SVD ] [ LoRA-Merging ] [ Efficiency ] [ Model-Merging ]


Abstract:

Merging Low-Rank Adaptation (LoRA) modules is a problem of growing importance as LoRA adapters proliferate. Despite various approaches showing benchmark improvements, the field lacks clear guiding principles for effective LoRA merging. Two predominant strategies exist: direct merging (DM), which preserves a memory-efficient two-matrix structure but sacrifices performance, and multiplied merging (MM), which delivers superior results but abandons the memory-efficient, low-rank architecture. In this paper, we first show that DM introduces interfering cross-terms that degrade performance, while MM exhibits linear mode connectivity in the loss landscape, making it an optimal strategy for merging. Then we demonstrate that merging with an SVD-based strategy combines MM's performance advantages with DM's memory efficiency, delivering the best of both approaches.
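To make the distinction concrete, here is a minimal NumPy sketch (not the authors' implementation; the function names, dimensions, and rank choice are illustrative assumptions) contrasting the three strategies the abstract describes. It shows why averaging the LoRA factors separately (DM) introduces cross-terms, and how an SVD truncation of the averaged product (MM) restores a low-rank two-matrix form:

```python
# Minimal sketch (hypothetical, not the paper's code) of three LoRA merging
# strategies. Each adapter i contributes a low-rank update B_i @ A_i.
import numpy as np

rank, d_in, d_out, n = 8, 64, 64, 3  # assumed LoRA rank, dims, adapter count
rng = np.random.default_rng(0)
adapters = [(rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in)))
            for _ in range(n)]

def direct_merge(adapters):
    # DM: average the B's and A's separately. This keeps the memory-efficient
    # two-matrix structure, but the product (mean B)(mean A) expands into
    # (1/n^2) * sum_{i,j} B_i A_j, so it contains interfering cross-terms
    # B_i A_j with i != j on top of the intended B_i A_i terms.
    B = np.mean([B for B, _ in adapters], axis=0)
    A = np.mean([A for _, A in adapters], axis=0)
    return B, A

def multiplied_merge(adapters):
    # MM: average the full products B_i A_i. No cross-terms, but the result
    # is a dense d_out x d_in matrix, abandoning the low-rank structure.
    return np.mean([B @ A for B, A in adapters], axis=0)

def svd_merge(adapters, r):
    # SVD-based merging: truncate the SVD of MM's averaged product back to
    # rank r, recovering a two-matrix form B' A' that approximates MM.
    U, S, Vt = np.linalg.svd(multiplied_merge(adapters), full_matrices=False)
    return U[:, :r] * S[:r], Vt[:r]  # B' = U_r diag(S_r), A' = V_r^T

# The cross-term gap: DM's product differs from the mean of the products.
B_dm, A_dm = direct_merge(adapters)
gap = np.linalg.norm(B_dm @ A_dm - multiplied_merge(adapters))
print(f"DM vs. MM discrepancy (Frobenius norm): {gap:.2f}")
```

In this reading, the SVD step trades a small truncation error for restoring the rank-r two-matrix form, which is how the merged adapter can keep DM's memory footprint while approximating MM's merged update.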
