

Poster

On the Query Complexity of Verifier-Assisted Language Generation

Edoardo Botta · Yuchen Li · Aashay Mehta · Jordan Ash · Cyril Zhang · Andrej Risteski

West Exhibition Hall B2-B3 #W-120
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Recently, a plethora of works have proposed inference-time algorithms (e.g. best-of-n) which incorporate verifiers to assist the generation process. Their quality-efficiency trade-offs have been empirically benchmarked on a variety of constrained generation tasks, but the algorithmic design landscape remains poorly understood. In this paper, we develop a mathematical framework for reasoning about constrained generation using a pre-trained language model generator oracle and a process verifier, which can decide whether a prefix can be extended to a string that satisfies the constraints of choice. We show that even in very simple settings, access to a verifier can turn an intractable problem (information-theoretically or computationally) into a tractable one. In fact, we show that even simple algorithms, like tokenwise rejection sampling, can enjoy significant benefits from access to a verifier. Empirically, we show that a natural modification of tokenwise rejection sampling, in which the sampler is allowed to "backtrack" (i.e., erase the final few generated tokens), has robust and substantive benefits over natural baselines (e.g. (blockwise) rejection sampling, nucleus sampling), both in terms of computational efficiency, accuracy, and diversity.
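The backtracking variant of tokenwise rejection sampling described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `sample_next` generator oracle, the `verifier` predicate, and the `max_tries`/`backtrack` parameters are all hypothetical stand-ins. The verifier here plays the role of a process verifier, accepting a prefix iff it can still be extended to a valid string.

```python
import random


def tokenwise_rejection_with_backtrack(sample_next, verifier, eos,
                                       max_tries=20, backtrack=3):
    """Grow a sequence one token at a time, consulting a process verifier.

    sample_next(prefix) -> token : generator oracle (samples a next token)
    verifier(prefix)    -> bool  : process verifier (can this prefix be
                                   extended to a constraint-satisfying string?)
    eos                          : end-of-sequence token
    """
    prefix = []
    while not prefix or prefix[-1] != eos:
        for _ in range(max_tries):
            tok = sample_next(prefix)
            if verifier(prefix + [tok]):  # accept only verifier-approved tokens
                prefix.append(tok)
                break
        else:
            # Every proposal was rejected: erase the final few generated
            # tokens (the "backtrack" move) and resume sampling from there.
            del prefix[max(0, len(prefix) - backtrack):]
    return prefix


# Toy usage: binary strings that must not contain "11", with '$' as EOS.
if __name__ == "__main__":
    random.seed(0)

    def sample_next(prefix):
        return random.choice(["0", "1", "$"])

    def no_double_ones(prefix):
        return "11" not in "".join(prefix)

    out = tokenwise_rejection_with_backtrack(sample_next, no_double_ones, eos="$")
    print("".join(out))  # a constraint-satisfying string ending in '$'
```

In this toy setting the verifier is exact and cheap; the paper's point is that even such simple verifier-guided sampling can make otherwise hard constrained generation tractable.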

