

Poster

Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning

Hui Zeng · Wenke Huang · Tongqing Zhou · Xinyi Wu · Guancheng Wan · Yingwen Chen · Zhiping Cai

East Exhibition Hall A-B #E-2012
[ Project Page ]
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Turning multi-round vanilla Federated Learning into one-shot FL (OFL) significantly reduces the communication burden and marks a big leap toward practical deployment. However, this work empirically and theoretically reveals that existing OFL falls into a garbage-in (inconsistent one-shot local models), garbage-out (degraded global model) pitfall. The inconsistency manifests as divergent feature representations and sample predictions. This work presents FAFI, a novel OFL framework that strengthens one-shot training on the client side to fundamentally overcome inferior local uploading. Specifically, unsupervised feature alignment and category-wise prototype learning are adopted in clients' local training so that local samples are represented consistently. On this basis, FAFI uses informativeness-aware feature fusion and prototype aggregation for global inference. Extensive experiments on three datasets demonstrate the effectiveness of FAFI, which achieves superior performance over 11 OFL baselines (+10.86% accuracy). Code is available at https://github.com/zenghui9977/FAFI_ICML25
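The client-side prototype learning and server-side prototype aggregation described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes mean-feature class prototypes, sample-count-weighted aggregation, and cosine-similarity nearest-prototype inference, and all function names are hypothetical.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Hypothetical local step: mean feature vector per class on one client.
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def aggregate_prototypes(client_protos, client_counts):
    # Hypothetical server step: weight each client's class prototype by
    # that client's sample count for the class, then sum.
    protos = np.stack(client_protos)                  # (K clients, C, D)
    weights = np.stack(client_counts).astype(float)   # (K, C)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights[..., None] * protos).sum(axis=0)  # (C, D)

def predict(features, protos):
    # Nearest-prototype classification via cosine similarity.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)
    return (f @ p.T).argmax(axis=1)
```

On toy data with well-separated class clusters, aggregating two clients' prototypes this way and classifying by nearest prototype recovers the labels; the informativeness-aware fusion in FAFI replaces the simple count weighting used here.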

Lay Summary:

One-shot federated learning (OFL) cuts communication costs by training models in just one round, but heterogeneous data makes local models highly inconsistent, harming global performance. This work proposes FAFI, a method that aligns local training to reduce these inconsistencies and applies informativeness-aware feature fusion during aggregation. Evaluations on three datasets show that FAFI boosts accuracy by 10.86% over existing approaches, offering a practical solution for federated learning.
