Poster
Diverging Preferences: When do Annotators Disagree and do Models Know?
Michael Zhang · Zhilin Wang · Jena Hwang · Yi Dong · Olivier Delalleau · Yejin Choi · Eunsol Choi · Xiang Ren · Valentina Pyatkin
East Exhibition Hall A-B #E-2705
We examine diverging preferences in human-labeled preference datasets. We develop a taxonomy of disagreement sources spanning ten categories across four high-level classes and find that the majority of disagreements are due to factors such as task underspecification or response style. Our findings challenge a standard assumption in reward modeling methods that annotator disagreements can be attributed to simple noise. We then explore how these findings impact two areas of LLM development: reward model training and evaluation. In our experiments, we demonstrate how standard reward modeling (e.g., Bradley-Terry) and LLM-as-Judge evaluation methods fail to account for divergence between annotators. These findings highlight challenges in LLM evaluations, which are greatly influenced by divisive features like response style, and in developing pluralistically aligned LLMs. To address these issues, we develop methods for identifying diverging preferences to mitigate their influence in evaluations and during LLM training.
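As a rough illustration (not the paper's implementation), the sketch below shows why standard Bradley-Terry reward-model training treats annotator disagreement as noise: the loss only sees a single "chosen vs. rejected" label per pair, so a divisive 3-2 split is collapsed into the same hard label as a unanimous one. The vote counts and reward values here are invented toy numbers.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Standard Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Hypothetical divisive pair: five annotators split 3-2 between responses A and B.
votes_a, votes_b = 3, 2
empirical_pref = votes_a / (votes_a + votes_b)  # 0.6, far from a decisive 1.0

# Scalar rewards the model currently assigns to each response (toy values).
reward_a = torch.tensor([0.1], requires_grad=True)
reward_b = torch.tensor([0.3], requires_grad=True)

# Majority-vote labeling collapses the 3-2 split into "A chosen, B rejected",
# so this loss is identical to the one computed for a unanimous 5-0 pair.
loss = bradley_terry_loss(reward_a, reward_b)
loss.backward()

# The gradient pushes reward_a above reward_b, i.e. the reward model is
# incentivized to prefer A decisively even though 40% of annotators disagreed.
print(f"loss={loss.item():.3f}, empirical preference for A={empirical_pref:.2f}")
```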
We explore user disagreements over preferred responses from LLMs. We analyze what factors lead to disagreement, finding that the majority of disagreements are due to factors such as task underspecification or response style. We then examine how disagreements are handled in existing LLM training and evaluation methods, finding that standard methods incentivize LLMs to decisively prefer one response even when users disagree. Finally, we propose methods for mitigating these behaviors in LLM training and evaluation.