

Poster

Larger or Smaller Reward Margins to Select Preferences for LLM Alignment?

Kexin Huang · Junkang Wu · Ziqian Chen · xue wang · Jinyang Gao · Bolin Ding · Jiancan Wu · Xiangnan He · Xiang Wang

East Exhibition Hall A-B #E-3311
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract: Preference learning is critical for aligning large language models (LLMs) with human values, and the quality of preference datasets plays a crucial role in this process. While existing metrics primarily assess data quality based on either *explicit* or *implicit* reward margins, their single-margin focus often leads to contradictory evaluations of the same data. To address this issue, we propose a new metric of *alignment potential*, $M_{AP}$, which integrates both margins to quantify the gap from the model's *current implicit* reward margin to the *target explicit* reward margin, thereby estimating the model's potential to align on the preference data. Empirical results demonstrate that training on the data selected by $M_{AP}$ consistently enhances alignment performance, surpassing existing metrics across different base models and optimization objectives. Furthermore, our method extends to self-play data generation frameworks, where we use this metric to identify high-quality data within the content the LLMs generate themselves. In this data generation scenario, our method surpasses current state-of-the-art methods across various training settings and demonstrates continuous improvements with increasing dataset size and training iterations.
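
The sketch below gives a concrete picture of one plausible reading of this description: the implicit margin is taken as the standard DPO-style policy/reference log-ratio margin, the explicit margin comes from an external reward model's scores, and the alignment potential is the gap between the two. The function names, the `beta` value, and the simple subtraction form of the gap are illustrative assumptions for this sketch, not the paper's exact formulation of $M_{AP}$.

```python
import torch


def implicit_reward_margin(policy_logps_chosen, policy_logps_rejected,
                           ref_logps_chosen, ref_logps_rejected, beta=0.1):
    """DPO-style implicit reward margin: beta times the difference of
    policy/reference log-ratios between chosen and rejected responses."""
    chosen_reward = beta * (policy_logps_chosen - ref_logps_chosen)
    rejected_reward = beta * (policy_logps_rejected - ref_logps_rejected)
    return chosen_reward - rejected_reward


def explicit_reward_margin(rm_score_chosen, rm_score_rejected):
    """Explicit reward margin from an external reward model's scalar scores."""
    return rm_score_chosen - rm_score_rejected


def alignment_potential(explicit_margin, implicit_margin):
    """Illustrative 'alignment potential': the gap from the model's current
    implicit margin to the target explicit margin. The paper's exact M_AP
    may differ; this only mirrors the abstract's description."""
    return explicit_margin - implicit_margin


if __name__ == "__main__":
    # Toy usage: score a batch of preference pairs and keep the top-k by the gap.
    torch.manual_seed(0)
    n = 8
    policy_c, policy_r = torch.randn(n), torch.randn(n)  # summed log-probs (hypothetical)
    ref_c, ref_r = torch.randn(n), torch.randn(n)
    rm_c, rm_r = torch.randn(n), torch.randn(n)          # reward-model scores (hypothetical)

    m_implicit = implicit_reward_margin(policy_c, policy_r, ref_c, ref_r)
    m_explicit = explicit_reward_margin(rm_c, rm_r)
    m_ap = alignment_potential(m_explicit, m_implicit)

    selected = torch.topk(m_ap, k=4).indices  # pairs with the largest remaining gap
    print("selected pair indices:", selected.tolist())
```

Under this reading, a large gap marks a pair the external reward model rates as clearly separable but the policy has not yet internalized, which is where further preference training would plausibly help most.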

Lay Summary:

When teaching AI language models to understand human preferences, researchers face a challenge: existing methods for evaluating training data quality often provide conflicting assessments, making it difficult to select the most effective data for training. Our research introduces a new measurement approach that bridges this gap by considering both what the AI system currently understands and what we want it to learn. This helps us identify which training examples will be most valuable for teaching the AI to better align with human values. Experiments show that training on examples selected by our method consistently outperforms training on data chosen by existing metrics under various training settings, enabling more efficient training of AI systems that better understand and respect human preferences.
