

Poster in Workshop: The Impact of Memorization on Trustworthy Foundation Models

Low-Rank Adaptation Secretly Imitates Differentially Private SGD

Saber Malekmohammadi · Golnoosh Farnadi

[ Project Page ]
Sat 19 Jul 8:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We examine low-rank adaptation methods, e.g., LoRA, through the lens of data privacy. We show theoretically that the low-rank adaptation used in LoRA injects random noise into the batch gradients with respect to the adapters, just as the DPSGD algorithm does, and we quantify the variance of the injected noise as a decreasing function of the adaptation rank. By establishing a Berry-Esseen-type bound on the total variation distance between the distribution of the injected noise and a Gaussian distribution of the same variance, we show that the dynamics of low-rank adaptation are very close to those of DPSGD performed with respect to the adapters. Consequently, low-rank adaptation provides robustness to membership inference attacks.
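To make the analogy concrete, here is a minimal numerical sketch of the general phenomenon, not the paper's construction: the dimension d, the rank grid, and the helper lora_style_update are all illustrative choices. It shows that passing a gradient through a random Gaussian low-rank factor and mapping it back returns the original gradient plus zero-mean noise whose spread shrinks as the rank grows, whereas DPSGD would add such noise explicitly.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 512                          # full parameter dimension (illustrative)
    g = rng.standard_normal(d)       # stand-in for a fixed batch gradient

    def lora_style_update(g, r, rng):
        # Project g through a Gaussian factor B (d x r) and map it back.
        # Since E[(1/r) B B^T] = I, the result is g plus zero-mean noise.
        B = rng.standard_normal((d, r))
        return (B @ (B.T @ g)) / r

    for r in [4, 16, 64, 256]:
        noise = np.stack([lora_style_update(g, r, rng) - g for _ in range(200)])
        print(f"rank={r:4d}  empirical noise std per coordinate: {noise.std():.2f}")

In this toy construction the printed standard deviation falls off roughly as 1/sqrt(r), consistent with the abstract's claim that the injected-noise variance is a decreasing function of the adaptation rank; DPSGD achieves a comparable effect by explicitly adding Gaussian noise of a chosen variance to the clipped gradients.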
