

Poster in Workshop: ICML 2025 Workshop on Collaborative and Federated Agentic Workflows (CFAgentic @ ICML'25)

FEDTAIL: Federated Long-Tailed Domain Generalization with Sharpness-Guided Gradient Matching

Sunny Gupta · Nikita Jangid · Shounak Das · Amit Sethi


Abstract:

Domain Generalization (DG) aims to train models that can generalize to unseen target domains without requiring access to target data. While recent advances in loss landscape smoothing have improved generalization, they often struggle in scenarios with long-tailed class distributions and conflicting optimization objectives. We propose FedTAIL, a federated domain generalization framework that addresses these limitations through sharpness-guided, gradient-aligned learning. Specifically, we introduce a gradient coherence term to reduce conflicts between classification and adversarial losses, enhancing optimization stability. To better handle class imbalance, we perform class-wise sharpness minimization and propose a curvature-aware dynamic weighting scheme that adaptively emphasizes tail classes. Furthermore, we improve conditional distribution alignment by incorporating sharpness-aware perturbations into entropy regularization. Our approach unifies objective harmonization, class-aware robustness, and conditional consistency in a scalable and generalizable framework. Extensive experiments on standard domain generalization benchmarks demonstrate that FedTAIL achieves state-of-the-art performance, particularly under domain shift and label imbalance, validating its effectiveness in both centralized and federated learning settings. Our code is publicly available at: https://github.com/sunnyinAI/FedTail
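To make the abstract's two central mechanisms concrete, below is a minimal PyTorch sketch of (1) a gradient coherence term, measured as cosine similarity between the classification and adversarial gradients, and (2) a class-wise, SAM-style sharpness perturbation with simple per-class weights. The function names, the inverse-frequency weighting, and the radius rho are illustrative assumptions, not the released implementation; the authors' code is at the repository linked above.

```python
# Hedged sketch of two ideas from the abstract; all names and
# hyperparameters here are assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def gradient_coherence(model, loss_cls, loss_adv):
    """Cosine similarity between the gradients of two objectives.

    Values near -1 indicate conflicting update directions; a training
    loop could down-weight the adversarial loss when coherence is low.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_cls = torch.autograd.grad(loss_cls, params, retain_graph=True, allow_unused=True)
    g_adv = torch.autograd.grad(loss_adv, params, retain_graph=True, allow_unused=True)

    def flatten(grads):
        return torch.cat([g.reshape(-1) for g in grads if g is not None])

    return F.cosine_similarity(flatten(g_cls), flatten(g_adv), dim=0)


def classwise_sam_ascent(model, x, y, num_classes, rho=0.05):
    """One SAM-style ascent step with a per-class loss weighting.

    Rare classes in the batch get larger weight (inverse frequency),
    a crude stand-in for the paper's curvature-aware dynamic weighting.
    The caller is expected to recompute the loss at the perturbed
    weights, backprop, restore the original weights, and step the
    optimizer, as in standard sharpness-aware minimization.
    """
    counts = torch.bincount(y, minlength=num_classes).float().clamp(min=1)
    class_w = (1.0 / counts)[y]            # inverse-frequency weight per sample
    class_w = class_w / class_w.mean()     # normalize around 1

    loss = (class_w * F.cross_entropy(model(x), y, reduction="none")).mean()
    loss.backward()

    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None]))
        for p in model.parameters():
            if p.grad is not None:
                # Ascend toward the locally "sharp" point within radius rho.
                p.add_(rho * p.grad / (grad_norm + 1e-12))
    model.zero_grad()
```

A training loop would typically use the coherence value to gate or rescale the adversarial term before the combined backward pass, and wrap the ascent/descent pair of classwise_sam_ascent around each federated client update.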
