

Poster

FedPHA: Federated Prompt Learning for Heterogeneous Client Adaptation

Chengying Fang · Wenke Huang · Guancheng Wan · Yihao Yang · Mang Ye

East Exhibition Hall A-B #E-2811
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Federated Prompt Learning (FPL) adapts pre-trained Vision-Language Models (VLMs) to federated learning through prompt tuning, leveraging their transferable representations and strong generalization capabilities. Traditional methods often require uniform prompt lengths for federated aggregation, limiting adaptability to clients with diverse prompt lengths and distribution biases. In this paper, we propose Federated Prompt Learning for Heterogeneous Client Adaptation (FedPHA), a novel framework that combines a fixed-length global prompt for efficient aggregation with local prompts of varying lengths to capture client-specific data characteristics. Additionally, FedPHA designs Singular Value Decomposition (SVD)-based projection and bidirectional alignment to disentangle global conflicts arising from client heterogeneity, so that personalized client tasks effectively utilize non-harmful global knowledge. As a result, global knowledge improves model generalization while local knowledge preserves client-specific optimization. Experimental results validate the effectiveness of FedPHA in balancing global and personalized knowledge in federated learning scenarios.
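The abstract describes an SVD-based projection that filters out conflicting directions when local prompts consume global knowledge. The sketch below illustrates one plausible form of this idea, assuming a simple setup where each prompt is a matrix of token embeddings: the local prompt is projected onto the top-k singular subspace of the global prompt. The function name, the shapes, and the choice of projecting row-wise onto the global right-singular vectors are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def project_onto_global_subspace(local_prompt, global_prompt, k=4):
    """Illustrative sketch (not the paper's exact method): project a
    local prompt onto the top-k singular subspace of the global prompt,
    discarding directions most likely to conflict with global knowledge.

    Shapes: local_prompt (L_local, d), global_prompt (L_global, d).
    Note that L_local and L_global may differ, matching FedPHA's
    support for heterogeneous prompt lengths.
    """
    # SVD of the global prompt; rows of vt span the embedding-space
    # directions that global knowledge occupies.
    _, _, vt = np.linalg.svd(global_prompt, full_matrices=False)
    basis = vt[:k]                      # (k, d) principal directions
    # Project each local prompt token onto the global subspace.
    return local_prompt @ basis.T @ basis
```

Because the rows of `basis` are orthonormal, this map is a true orthogonal projection (applying it twice changes nothing), and the projected local prompt has rank at most k regardless of its length.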

Lay Summary:

Federated learning allows devices like smartphones, hospitals, or schools to train AI models together without sharing sensitive data. But a key challenge is that every device has different amounts and types of data, and forcing them to follow the same learning process can lead to poor performance.

Our work introduces FedPHA, a method designed to address this problem by balancing shared learning with personalization. It gives each device a common "global prompt" to ensure consistent collaboration, while also allowing it to have its own "local prompt" that better fits its unique data. We also designed a mathematical technique to ensure these global and local prompts don't conflict, helping devices learn effectively from both shared and personalized knowledge.

This approach allows AI systems to benefit from diverse data sources, improving overall learning while ensuring that each device's unique needs are respected.
