

Spotlight in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models

ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models

Raghav Singhal · Kaustubh Ponkshe · Rohit Vartak · Praneeth Vepakomma


Abstract:

Large Language Models (LLMs) demonstrate strong performance across a variety of tasks, yet adapting them efficiently to new domains remains a challenge. Parameter-Efficient Fine-Tuning (PEFT) mitigates this by introducing lightweight, trainable modules while keeping most pre-trained weights frozen. We introduce ABBA, a new PEFT approach that models updates as a Hadamard product of two independently learnable low-rank matrices, fully decoupled from the pre-trained weights. This reparameterization significantly enhances expressivity under fixed parameter budgets. We provide a formal analysis of ABBA’s expressive capacity and demonstrate that it consistently outperforms existing PEFT methods on arithmetic and commonsense reasoning benchmarks across multiple models by a significant margin. Our code is available at: https://github.com/CERT-Lab/abba.
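To make the reparameterization concrete, the sketch below shows one way an ABBA-style update could be expressed in PyTorch: the weight update is the Hadamard (element-wise) product of two independently learnable low-rank products, applied alongside a frozen pre-trained linear layer. The class name `ABBALinear`, the rank arguments `r1`/`r2`, the `scaling` factor, and the initialization are all illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class ABBALinear(nn.Module):
    """Hypothetical sketch of an ABBA-style adapter around a frozen linear layer.

    The update is parameterized as the Hadamard product of two independent
    low-rank factorizations:
        delta_W = (B1 @ A1) * (B2 @ A2)
    Names and hyperparameters here are assumptions for illustration only.
    """
    def __init__(self, base: nn.Linear, r1: int = 8, r2: int = 8, scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep pre-trained weights frozen
            p.requires_grad_(False)
        out_f, in_f = base.weight.shape
        # Two independent low-rank pairs; this initialization is a guess,
        # not the scheme used in the paper.
        self.A1 = nn.Parameter(torch.randn(r1, in_f) * 0.01)
        self.B1 = nn.Parameter(torch.zeros(out_f, r1))   # zero init -> zero update at start
        self.A2 = nn.Parameter(torch.randn(r2, in_f) * 0.01)
        self.B2 = nn.Parameter(torch.randn(out_f, r2) * 0.01)
        self.scaling = scaling

    def delta_weight(self) -> torch.Tensor:
        # Element-wise product of two rank-r1 and rank-r2 matrices:
        # the result can have rank up to r1 * r2, unlike a single low-rank update.
        return (self.B1 @ self.A1) * (self.B2 @ self.A2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.delta_weight().T)

# Example usage: wrap a frozen linear layer with the adapter.
layer = nn.Linear(768, 768)
adapted = ABBALinear(layer, r1=8, r2=8)
y = adapted(torch.randn(4, 768))
```

The point of the sketch is the expressivity argument: with the same parameter count as a single low-rank pair of rank r1 + r2, the Hadamard product of two independent pairs can represent updates of rank up to r1 * r2, while remaining fully decoupled from the frozen pre-trained weights.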
