

Poster

DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning Based on Constant-Overhead Linear Secret Resharing

Alexander Bienstock · Ujjwal Kumar · Antigoni Polychroniadou

East Exhibition Hall A-B #E-901
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Federated Learning (FL) solutions with central Differential Privacy (DP) have seen large improvements in their utility in recent years arising from the matrix mechanism, while FL solutions with distributed (more private) DP have lagged behind. In this work, we introduce the distributed matrix mechanism to achieve the best of both worlds: the better privacy of distributed DP and the better utility of the matrix mechanism. We accomplish this using a novel cryptographic protocol that securely transfers sensitive values across the client committees of different training iterations with constant communication overhead. This protocol accommodates the dynamic participation of users required by FL, including those that may drop out of the computation. We provide experiments showing that our mechanism indeed significantly improves the utility of FL models compared to previous distributed DP mechanisms, with little added overhead.
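For intuition, the idea of handing a secret from one client committee to the next can be sketched with plain additive secret sharing. This is a toy illustration only, not the paper's constant-overhead resharing protocol: the function names, committee sizes, and the choice of modulus are ours, and `random` stands in for a cryptographically secure source.

```python
import random

P = 2**61 - 1  # illustrative prime modulus; all arithmetic is mod P

def share(secret, n):
    """Split `secret` into n additive shares mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reshare(old_shares, n_new):
    """Each old-committee member re-shares its share to the new committee;
    each new member sums the sub-shares it receives."""
    subs = [share(s, n_new) for s in old_shares]
    return [sum(col) % P for col in zip(*subs)]

secret = 123456789
c1 = share(secret, 5)          # committee of iteration t (5 clients)
c2 = reshare(c1, 7)            # hand off to the committee of iteration t+1 (7 clients)
assert sum(c2) % P == secret   # the new committee can still reconstruct
```

Note that this naive handoff costs each old member one fresh sharing, i.e., communication grows with the committee sizes; achieving constant overhead per value is exactly the technical contribution the abstract claims.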

Lay Summary: In Federated Learning (FL), a machine learning model is trained using data from several end-users. Since such data can often be sensitive, a key challenge in FL is maintaining the utility of the trained models while preserving the privacy of users. The main privacy metric for FL is differential privacy (DP). Roughly speaking, DP guarantees that with high probability, one cannot tell whether a user's data was used to train a model. In $\textit{central}$ DP, privacy holds with respect to other users in the system, but not the central service provider training the model. In $\textit{distributed}$ DP, privacy holds with respect to the service provider as well.

FL with central DP uses the $\textit{matrix mechanism}$ to achieve excellent privacy-utility trade-offs. Previously, this mechanism could not be applied to distributed DP, so the utility of distributed DP paled in comparison to that of central DP.

In this work, we propose a solution that achieves the "best of both worlds" of the central and distributed DP settings, called the $\textit{Distributed Matrix Mechanism}$ (DMM). We achieve privacy against the service provider, i.e., distributed DP, while using the matrix mechanism to obtain the same utility as in the central DP setting.
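The utility gain from the matrix mechanism can be seen in a small numerical sketch. This is our own toy setup, not the paper's construction: a prefix-sum workload $A$ (release every running sum of per-round updates), Gaussian noise calibrated to the maximum column norm of the noised factor, and the classic square-root factorization $A = C \cdot C$.

```python
import numpy as np

n = 64
A = np.tril(np.ones((n, n)))  # prefix-sum workload: row i releases sum of rounds 0..i

# Lower-triangular Toeplitz square root of A, i.e. C @ C == A.
# Its entries follow the power series of (1 - x)^(-1/2): f(k) = binom(2k, k) / 4^k.
f = np.ones(n)
for k in range(1, n):
    f[k] = f[k - 1] * (2 * k - 1) / (2 * k)
idx = np.subtract.outer(np.arange(n), np.arange(n))  # i - j
C = np.where(idx >= 0, f[np.clip(idx, 0, n - 1)], 0.0)

def expected_error(B, C):
    """Total expected squared error of M(x) = B (C x + z) with
    z ~ N(0, sens(C)^2 I), where sens(C) is the max column L2 norm of C."""
    sens = np.linalg.norm(C, axis=0).max()
    return sens**2 * np.linalg.norm(B, "fro") ** 2

err_input = expected_error(A, np.eye(n))   # noise each round's update, then sum
err_output = expected_error(np.eye(n), A)  # compute each running sum, then noise it
err_mm = expected_error(C, C)              # matrix mechanism with A = C @ C

assert np.allclose(C @ C, A)
assert err_mm < min(err_input, err_output)
```

The factorized mechanism beats both trivial strategies by a wide margin at this size; the paper's contribution is making such a factorization usable when the noise must be generated jointly by clients (distributed DP) rather than by the central server.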
