

Oral
in
Workshop: 2nd Workshop on Models of Human Feedback for AI Alignment (MoFA)

Aligned Textual Scoring Rule

Yuxuan Lu · Yifan Wu · Jason Hartline · Michael Curry

[ Project Page ]
Fri 18 Jul 9:40 a.m. PDT — 9:55 a.m. PDT

Abstract:

Scoring rules elicit probabilistic predictions from a strategic agent by scoring the prediction against a ground-truth state. A scoring rule is \emph{proper} if, from the agent's perspective, reporting the true belief maximizes the expected score. With the development of language models, Wu & Hartline (2024) propose a reduction from textual information elicitation to the numerical (i.e.\ probabilistic) information elicitation problem, which achieves provable properness for textual elicitation. However, not all proper scoring rules align with human preference over text. Our paper designs the Human-Aligned Scoring Rule (HASR) for text by minimizing the mean squared error between a proper scoring rule and a reference score (e.g.\ a human score). Our experiments show that HASR outperforms previous methods in aligning with human preference while preserving the same properness guarantee.
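As a minimal sketch of the two conditions the abstract refers to, in hypothetical notation not taken from the paper ($S$ for the scoring rule, $p$ for the agent's true belief, $r$ for a reported belief, $\theta$ for the ground-truth state, $S_{\mathrm{ref}}$ for the reference score, and $D$ for a distribution over report/state pairs):

% Properness: truthful reporting maximizes the agent's expected score.
\mathbb{E}_{\theta \sim p}\big[S(p, \theta)\big] \;\ge\; \mathbb{E}_{\theta \sim p}\big[S(r, \theta)\big] \quad \text{for all reports } r.

% Alignment objective (sketch): among proper scoring rules, minimize the
% mean squared error to the reference score, e.g.\ a human score.
\min_{S \ \text{proper}} \; \mathbb{E}_{(r, \theta) \sim D}\Big[\big(S(r, \theta) - S_{\mathrm{ref}}(r, \theta)\big)^2\Big].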
