Poster in Workshop: Actionable Interpretability
MPF: Aligning and Debiasing Language Models Post-Deployment via Multi-Perspective Fusion
Xin Guan · Pei-Hsin Lin · Zekun Wu · Ze Wang · Ruibo Zhang · Emre Kazim · Adriano Koshiyama
Multi-Perspective Fusion (MPF) is a novel post-training alignment framework for large language models (LLMs), developed in response to the growing need for easily applied bias mitigation. Built on top of the SAGED pipeline, an automated system for constructing bias benchmarks and extracting interpretable baseline distributions, MPF leverages multi-perspective generations to expose biases in LLM outputs and align them with nuanced, human-like baselines. By decomposing a baseline, such as the sentiment distribution of HR professionals, into interpretable perspective components, MPF guides generation through sampling and balancing of responses, weighted by the probabilities obtained in the decomposition. Empirically, we demonstrate its ability to align LLM sentiment distributions with both counterfactual baselines (absolute equality) and the HR baseline (biased toward Top Uni.), yielding small KL divergence, reduced calibration error, and generalization to unseen questions. This shows that MPF offers a scalable and interpretable method for alignment and bias mitigation, compatible with already-deployed LLMs and requiring neither extensive prompt engineering nor fine-tuning.
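The fusion step the abstract describes could be sketched as follows: perspective components and their mixture weights come from the decomposition of a baseline distribution, and responses are then sampled in proportion to those weights. This is a minimal illustration under assumed values, not the paper's implementation; `perspective_weights`, `generate_with_perspective`, and the `llm` callable are all hypothetical names.

```python
import random

# Hypothetical output of the baseline decomposition: interpretable
# perspective components with their mixture weights (assumed values).
perspective_weights = {
    "optimistic": 0.5,
    "neutral": 0.3,
    "critical": 0.2,
}

def generate_with_perspective(llm, prompt: str, perspective: str) -> str:
    """Condition a deployed LLM on one perspective (hypothetical interface)."""
    system = f"Answer from a {perspective} perspective."
    return llm(system=system, user=prompt)  # placeholder for a real API call

def mpf_generate(llm, prompt: str, n_samples: int = 10) -> list[str]:
    """Sample perspectives in proportion to the decomposed baseline weights,
    so the aggregate of the returned responses tracks the target baseline."""
    perspectives = random.choices(
        population=list(perspective_weights),
        weights=list(perspective_weights.values()),
        k=n_samples,
    )
    return [generate_with_perspective(llm, prompt, p) for p in perspectives]
```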
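Likewise, the reported alignment check could be reproduced with a short snippet: the KL divergence between the baseline sentiment distribution and the empirical distribution of MPF outputs. The histograms below are made-up placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

# Hypothetical sentiment histograms over the same bins
# (e.g., negative / neutral / positive); values are illustrative only.
baseline = np.array([0.20, 0.30, 0.50])     # target baseline distribution
mpf_outputs = np.array([0.24, 0.31, 0.45])  # empirical MPF output distribution

kl = entropy(baseline, mpf_outputs)  # small KL => outputs track the baseline
print(f"KL(baseline || MPF) = {kl:.4f}")
```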