

Oral

VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data

Thomas Zeng · Shuibai Zhang · Shutong Wu · Christian Classen · Daewon Chae · Ethan Ewer · Minjae Lee · Heeju Kim · Wonjun Kang · Jackson Kunde · Ying Fan · Jungtaek Kim · Hyung Il Koo · Kannan Ramchandran · Dimitris Papailiopoulos · Kangwook Lee

West Exhibition Hall C
Oral 3A Reasoning
Wed 16 Jul 10:45 a.m. — 11 a.m. PDT

Abstract:

Process Reward Models (PRMs) have proven effective at enhancing mathematical reasoning for Large Language Models (LLMs) by leveraging increased inference-time computation. However, they are predominantly trained on mathematical data, and their generalizability to non-mathematical domains has not been rigorously studied. In response, this work first shows that current PRMs perform poorly in other domains. To address this limitation, we introduce VersaPRM, a multi-domain PRM trained on synthetic reasoning data generated using our novel data generation and annotation method. VersaPRM achieves consistent performance gains across diverse domains. For instance, in the MMLU-Pro category of Law, VersaPRM, via weighted majority voting, achieves a 7.9% performance gain over the majority voting baseline, surpassing Qwen2.5-Math-PRM's gain of 1.3%. We further contribute to the community by open-sourcing all data, code, and models for VersaPRM.
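
As background, PRM-weighted majority voting samples many chains of thought, scores each step with the PRM, collapses the step scores into one weight per chain, and sums those weights per final answer. The Python sketch below is a minimal illustration of this general scheme, not VersaPRM's released code; the input format and the min-over-steps aggregation are assumptions made here for illustration.

    from collections import defaultdict

    def weighted_majority_vote(candidates):
        """Pick the answer with the highest total PRM weight.

        `candidates` is a list of (final_answer, step_scores) pairs,
        where step_scores are a PRM's per-step correctness scores in
        [0, 1] for one sampled chain of thought (assumed format).
        """
        totals = defaultdict(float)
        for answer, step_scores in candidates:
            # Collapse step scores into one weight per chain; taking
            # the minimum (a chain is only as strong as its weakest
            # step) is one common choice. Plain majority voting is
            # the special case where every weight is 1.
            weight = min(step_scores)
            totals[answer] += weight
        return max(totals, key=totals.get)

    # Toy example: two low-scoring chains answer "B", one
    # high-scoring chain answers "C".
    samples = [
        ("B", [0.4, 0.3, 0.5]),     # weight 0.3
        ("B", [0.5, 0.2, 0.6]),     # weight 0.2
        ("C", [0.99, 0.97, 0.96]),  # weight 0.96
    ]
    print(weighted_majority_vote(samples))  # "C" outweighs two "B" votes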
