Poster
FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for Enabling Fair LLM-Based Recommender Systems
Arya Fayyazi · Mehdi Kamal · Massoud Pedram
East Exhibition Hall A-B #E-1101
We propose FACTER, a fairness-aware framework for LLM-based recommendation systems that integrates conformal prediction with dynamic prompt engineering. By introducing an adaptive semantic variance threshold and a violation-triggered mechanism, FACTER automatically tightens fairness constraints whenever biased patterns emerge. We further develop an adversarial prompt generator that leverages historical violations to reduce repeated demographic biases without retraining the LLM. Empirical results on MovieLens and Amazon show that FACTER substantially reduces fairness violations (up to 95.5%) while maintaining strong recommendation accuracy, revealing semantic variance as a potent proxy for bias.
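The core mechanism described above, calibrating a conformal threshold on a nonconformity score and tightening it whenever a violation is observed, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score here is a placeholder for FACTER's semantic variance across demographic prompt variants, and the function names (`conformal_threshold`, `monitor`) and the tightening step size are hypothetical.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile: with probability >= 1 - alpha,
    a new non-violating score falls at or below this threshold.
    `cal_scores` stand in for semantic-variance scores measured
    on a held-out calibration set (a hypothetical setup)."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def monitor(scores, cal_scores, alpha=0.1, tighten=0.05):
    """Flag a fairness violation whenever a score exceeds the
    current threshold, then raise alpha so the next threshold is
    stricter (a toy stand-in for violation-triggered tightening)."""
    flags = []
    for s in scores:
        tau = conformal_threshold(cal_scores, alpha)
        violated = s > tau
        flags.append(violated)
        if violated:
            # Larger alpha -> lower (1 - alpha)-quantile -> tighter bound.
            alpha = min(alpha + tighten, 0.5)
    return flags
```

In this sketch, each flagged output would also feed the adversarial prompt generator so that later prompts steer the LLM away from the same biased pattern; that feedback loop is omitted here for brevity.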
Modern AI systems, such as large language models, are widely used for recommendations and decision-making but can produce biased or unfair results, such as favoring certain age groups or genders. Our work introduces FACTER, a method that detects and corrects such unfair behavior without retraining the model. By statistically monitoring model outputs and updating future prompts based on prior unfair responses, FACTER helps ensure more equitable outcomes while preserving model accuracy. It is efficient, easy to apply, and improves fairness across diverse real-world applications.