

Poster

You Get What You Give: Reciprocally Fair Federated Learning

Aniket Murhekar · Jiaxin Song · Parnian Shahkar · Bhaskar Ray Chaudhury · Ruta Mehta

East Exhibition Hall A-B #E-1100
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract: Federated learning (FL) is a popular collaborative learning paradigm, whereby agents with individual datasets can jointly train an ML model. While higher data sharing improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contribution. This leads to undesirable behavior at a Nash equilibrium (NE) such as *free-riding*, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design $\mathcal{M}^{Shap}$, a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions and achieves *reciprocal fairness*: each agent's payoff equals her contribution to the collaboration, as measured by the Shapley share. In addition to fairness, we show that the NE under $\mathcal{M}^{Shap}$ has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that $\mathcal{M}^{Shap}$ outperforms baselines in terms of fairness and efficiency.
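To make the notion of a Shapley share concrete, here is a minimal sketch of computing exact Shapley values by averaging marginal contributions over all agent orderings. The valuation function and agent data below are hypothetical toy stand-ins (not the paper's utility model or the $\mathcal{M}^{Shap}$ mechanism), and brute-force enumeration is tractable only for small numbers of agents.

```python
from itertools import permutations

def shapley_shares(agents, value):
    """Exact Shapley values: average each agent's marginal
    contribution over all orderings of the agents."""
    shares = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = set()
        for a in order:
            before = value(frozenset(coalition))
            coalition.add(a)
            shares[a] += value(frozenset(coalition)) - before
    for a in agents:
        shares[a] /= len(perms)
    return shares

# Toy valuation: coalition value is the total data contributed
# (a hypothetical stand-in for model accuracy).
data = {"alice": 4.0, "bob": 2.0, "carol": 2.0}
value = lambda S: sum(data[a] for a in S)

shares = shapley_shares(list(data), value)
```

Because Shapley values are efficient, the shares always sum to the grand-coalition value, which is what makes a payment mechanism built on them budget-balanced: payments redistribute payoffs so each agent nets her share, with transfers summing to zero.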

Lay Summary:

(1) Federated learning (FL) is a popular collaborative learning paradigm, but the cost of data sharing causes agents to be strategic about their data contribution. (2) We design MShap -- a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions. (3) Our mechanism achieves reciprocal fairness and also has desirable guarantees in terms of accuracy, welfare, and total data collected. The theoretical results are validated on real-world datasets.
