

Poster

Optimizing Language Models for Inference Time Objectives using Reinforcement Learning

Yunhao Tang · Kunhao Zheng · Gabriel Synnaeve · Rémi Munos

West Exhibition Hall B2-B3 #W-718
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract: In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives over $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. Training language models on reasoning datasets, we showcase the performance trade-off enabled by training with such objectives. When training on code generation tasks, we show that the approach significantly improves pass@$k$ objectives compared to the baseline method.
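As a concrete reference for the two inference-time objectives the abstract names, here is a minimal sketch of how they are typically scored. The `pass_at_k` function uses the standard unbiased combinatorial estimator (given n samples of which c are correct), and `majority_vote` picks the most frequent answer among k samples; these are standard definitions, not the authors' training code, and the function names are illustrative.

```python
from collections import Counter
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k
    samples drawn from n total (c of them correct) is correct."""
    if n - c < k:
        # Every size-k subset must contain a correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def majority_vote(answers):
    """Majority-voting objective: the most frequent answer among the
    k sampled answers is taken as the model's final prediction."""
    return Counter(answers).most_common(1)[0][0]
```

For example, with n = 4 samples and c = 2 correct, pass@2 is 1 − C(2,2)/C(4,2) = 5/6; training that explicitly targets such objectives (rather than single-sample accuracy) is what the paper studies.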

Lay Summary: Learning to do better inference-time computation with Reinforcement Learning.
