Poster
DANCE: Dual Unbiased Expansion with Group-acquired Alignment for Out-of-distribution Graph Fairness Learning
Yifan Wang · Hourun Li · Ling Yue · Zhiping Xiao · Jia Yang · Changling Zhou · Wei Ju · Ming Zhang · Xiao Luo
East Exhibition Hall A-B #E-1608
Graph neural networks (GNNs) have shown strong performance in graph fairness learning, which aims to ensure that predictions are unbiased with respect to sensitive attributes. However, existing approaches usually assume that training and test data share the same distribution, an assumption that rarely holds in the real world. To tackle this challenge, we propose a novel approach named Dual Unbiased Expansion with Group-acquired Alignment (DANCE) for graph fairness learning under distribution shifts. The core idea of DANCE is to synthesize challenging yet unbiased virtual graph data in both the graph and hidden spaces, simulating distribution shifts from a data-centric view. Specifically, we introduce an unbiased Mixup in the hidden space that prioritizes minority groups to address the potential imbalance of sensitive attributes. Simultaneously, we conduct fairness-aware adversarial learning in the graph space to focus on challenging samples and improve model robustness. To further bridge the domain gap, we propose a group-acquired alignment objective that prioritizes groups of negative pairs with identical sensitive labels. Additionally, a representation disentanglement objective decorrelates sensitive attributes from target representations to further enhance fairness. Extensive experiments demonstrate the superiority of the proposed DANCE.
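For intuition, the sketch below illustrates what a group-prioritized Mixup on hidden representations could look like: mix partners are drawn with inverse-frequency weights so that minority sensitive groups are sampled more often. This is a minimal illustrative sketch, not the paper's implementation; the function name `group_balanced_mixup`, the Beta parameter `alpha`, and the inverse-frequency weighting scheme are all assumptions.

```python
import torch

def group_balanced_mixup(h, y, s, alpha=0.5):
    """Hypothetical sketch of an unbiased Mixup in the hidden space.

    h: (N, d) hidden representations from the GNN encoder
    y: (N,) target labels
    s: (N,) sensitive attribute labels (integer-coded)
    """
    # Inverse-frequency weights: samples from minority sensitive
    # groups are more likely to be drawn as mix partners.
    counts = torch.bincount(s)
    weights = 1.0 / counts[s].float()
    partner = torch.multinomial(weights, num_samples=h.size(0), replacement=True)

    # Standard Mixup interpolation coefficient from a Beta distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample()

    # Interpolate each representation with its sampled partner.
    h_mix = lam * h + (1 - lam) * h[partner]
    return h_mix, y, y[partner], lam
```

Under this sketch, `h_mix` would be passed to the downstream classifier and trained with the usual Mixup objective, `lam * loss(pred, y) + (1 - lam) * loss(pred, y[partner])`, so that synthetic examples interpolate toward under-represented sensitive groups.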
AI systems that learn from networked data, such as social networks or financial platforms, are increasingly used to make important decisions. However, these systems can sometimes treat people unfairly, especially when the distribution of the test data differs from that of the training data, which often happens when data comes from different communities or changes over time. Our research introduces a new method called DANCE that helps AI systems stay fair even when the data distribution shifts. DANCE creates realistic training scenarios that simulate such changes, pushing the AI to learn fairer and more balanced patterns. It also teaches the system to focus on meaningful traits rather than sensitive ones like gender or race, and ensures that people from different groups are treated consistently. With DANCE, we take a step toward AI systems that are not only smarter but also fairer and more reliable in the real world.