

Poster in Workshop: DIG-BUGS: Data in Generative Models (The Bad, the Ugly, and the Greats)

Optimal Defenses Against Data Reconstruction Attacks

Yuxiao Chen · Gamze Gursoy · Qi Lei

Keywords: [ Federated learning ] [ Bayesian C-R lower bound ] [ Data reconstruction ]

Sat 19 Jul 3 p.m. PDT — 3:45 p.m. PDT

Abstract:

Federated Learning (FL) is designed to prevent data leakage through collaborative model training without centralized data storage. However, it is vulnerable to reconstruction attacks that recover original training data from shared gradients. To optimize the trade-off between data leakage and utility loss, we first derive a theoretical lower bound of reconstruction error (among all attackers) for the two standard methods: adding noise, and gradient pruning. We then customize these two defenses to be parameter- and model-specific and achieve the optimal trade-off between our obtained reconstruction lower bound and model utility. Experimental results validate that our methods outperform Gradient Noise and Gradient Pruning by protecting the training data better while also achieving better utility.
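
For concreteness, below is a minimal sketch of the two baseline defenses the abstract compares against: adding i.i.d. Gaussian noise to the shared gradients and magnitude-based gradient pruning. The function names, the noise scale sigma, and the pruning ratio are illustrative assumptions; this is not the paper's parameter- and model-specific scheme.

```python
import torch

def add_gradient_noise(grads, sigma=0.01):
    """Baseline defense: add independent Gaussian noise to each shared gradient tensor."""
    return [g + sigma * torch.randn_like(g) for g in grads]

def prune_gradients(grads, prune_ratio=0.9):
    """Baseline defense: zero out the smallest-magnitude entries of each gradient tensor."""
    pruned = []
    for g in grads:
        flat = g.abs().flatten()
        k = int(prune_ratio * flat.numel())
        if k == 0:
            pruned.append(g.clone())
            continue
        # Entries at or below the k-th smallest magnitude are dropped.
        threshold = flat.kthvalue(k).values
        pruned.append(torch.where(g.abs() > threshold, g, torch.zeros_like(g)))
    return pruned
```

In both cases the client applies the defense to its local gradients before sharing them with the server, trading gradient fidelity (model utility) for a harder reconstruction problem for the attacker.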
