

Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models

Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning

Raghav Singhal · Kaustubh Ponkshe · Rohit Vartak · Lav Varshney · Praneeth Vepakomma


Abstract: Low-Rank Adaptation (LoRA) is widely used for efficient fine-tuning, but federated settings pose challenges because naive averaging of adapter matrices is inexact. We propose **Federated Silver Bullet (Fed-SB)**, a scalable and communication-efficient method for federated fine-tuning based on LoRA-SB, which introduces a small learnable square matrix $R$ between frozen low-rank adapters. By directly averaging $R$ across clients, Fed-SB enables exact aggregation and decouples communication cost from the number of clients. It achieves **state-of-the-art performance** across commonsense reasoning, arithmetic reasoning, and language inference tasks while reducing communication costs by up to **230x**. Fed-SB is especially well-suited for private settings, reducing trainable parameters and avoiding noise amplification. Our code is available at: https://github.com/CERT-Lab/fed-sb.
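The exact-aggregation claim follows from the linearity of the update when only $R$ varies across clients. Below is a minimal, illustrative sketch (not the authors' implementation; all names, shapes, and the random stand-ins for trained matrices are assumptions) showing that averaging the small $r \times r$ matrices is equivalent to averaging the full adapter updates when the low-rank factors are frozen and shared.

```python
import numpy as np

# Illustrative sketch of Fed-SB-style aggregation (names/shapes are assumptions).
# In LoRA-SB, the update to a frozen weight W0 is B @ R @ A, where the low-rank
# factors B (d x r) and A (r x k) are frozen; only the small r x r matrix R is trained.
# Fed-SB aggregates by directly averaging the clients' R matrices.

rng = np.random.default_rng(0)
d, k, r, num_clients = 64, 32, 4, 8

B = rng.standard_normal((d, r))   # frozen adapter factor, shared by all clients
A = rng.standard_normal((r, k))   # frozen adapter factor, shared by all clients

# Each client trains its own small R locally (random stand-ins for illustration).
client_Rs = [rng.standard_normal((r, r)) for _ in range(num_clients)]

# Server step: average only the r x r matrices, so the communicated payload
# per client is O(r^2) regardless of model size.
R_avg = np.mean(client_Rs, axis=0)

# Exact aggregation: because B and A are identical and frozen across clients,
# averaging R is equivalent to averaging the full low-rank updates.
update_from_avg_R = B @ R_avg @ A
avg_of_updates = np.mean([B @ R @ A for R in client_Rs], axis=0)
print(np.allclose(update_from_avg_R, avg_of_updates))  # True
```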
