Poster
in
Workshop: 1st Workshop on Foundation Models for Structured Data (FMSD)
Towards Fair In-Context Learning with Tabular Foundation Models
Patrik Kenfack · Samira Ebrahimi Kahou · Ulrich Aïvodji
Tabular foundation models have shown promising in-context learning (ICL) capabilities on structured data, using training examples as context without further parameter updates. This emerging approach positions itself as a competitive alternative to traditional gradient-boosted tree methods. However, while biases in conventional machine learning models are well documented, it remains unclear how these biases manifest in tabular ICL. This paper investigates the fairness implications of tabular ICL and explores three bias-mitigation preprocessing strategies: correlation removal, group-balanced demonstration selection, and uncertainty-based demonstration selection. Comprehensive experiments indicate that uncertainty-based demonstration selection consistently enhances group fairness of the predictions. The source code for reproducing the results of this work can be found at https://anonymous.4open.science/r/Fair-TabICL-DD84.
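The abstract names uncertainty-based demonstration selection as the most effective strategy but does not spell out the selection criterion. The sketch below is one plausible, minimal interpretation, assuming predictive entropy from a proxy classifier as the uncertainty measure; the function name `select_demonstrations` and the entropy heuristic are illustrative assumptions, not the paper's actual method.

```python
import math

def select_demonstrations(probs, k):
    """Rank candidate demonstrations by the predictive entropy of a
    proxy model's class probabilities and return the indices of the
    k most uncertain candidates. `probs` is a list of per-candidate
    class-probability vectors (hypothetical interface)."""
    def entropy(p):
        # Shannon entropy; skip zero entries to avoid log(0)
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]),
                    reverse=True)  # most uncertain first
    return ranked[:k]

# Toy example: four candidate rows for a binary task.
probs = [[0.99, 0.01], [0.55, 0.45], [0.50, 0.50], [0.90, 0.10]]
picked = select_demonstrations(probs, k=2)
# picks the two near-50/50 candidates
```

The selected indices would then pick the rows of the training table passed as context to the tabular foundation model; confident (low-entropy) candidates are left out of the prompt.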