

Poster in Workshop: Tiny Titans: The next wave of On-Device Learning for Foundation Models (TTODLer-FM)

Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models

Katrina Brown · Aneesh Muppidi · Rana Shahout

[ Project Page ]
Fri 18 Jul 3 p.m. PDT — 3:45 p.m. PDT

Abstract:

Large language models (LLMs) achieve state-of-the-art accuracy on complex reasoning tasks by generating multiple chain-of-thought (CoT) traces, but using a fixed token budget per query leads to over-computation on easy inputs and under-computation on hard ones. We introduce Predictive Scheduling, a plug-and-play framework that runs lightweight predictors—an MLP on intermediate transformer hidden states or a LoRA-fine-tuned classifier on raw question text—to estimate each query's optimal reasoning length or difficulty before any full generation. Our greedy batch allocator then dynamically distributes a fixed total token budget across queries to maximize expected accuracy. On the GSM8K arithmetic benchmark, predictive scheduling yields up to 7.9 percentage points of absolute accuracy gain over uniform budgeting at identical token cost, closing over 50% of the gap to an oracle with perfect foresight. A systematic layer-wise study reveals that the transformer's middle layers (12–17) carry the richest signals for reasoning-length estimation. These results demonstrate that predicting token budgets before generation enables fine-grained control of the compute–accuracy trade-off, offering a concrete path toward latency-sensitive, cost-efficient LLM deployments.
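The allocation step described in the abstract can be pictured with a small greedy routine. The sketch below is an illustrative assumption rather than the authors' released code: it assumes each query comes with a predicted accuracy-versus-budget curve (the kind of signal the MLP or LoRA predictor would supply) and hands out a fixed total token budget in chunks to whichever query gains the most expected accuracy from its next chunk. The function name `greedy_allocate`, the chunk size, and the toy saturating curves are all hypothetical.

```python
# Hypothetical sketch of a greedy token-budget allocator in the spirit of the
# abstract: given per-query predictions of how accuracy grows with the number
# of reasoning tokens, greedily assign a fixed total budget in fixed-size
# chunks to whichever query gains the most expected accuracy from the next chunk.
# The curve shapes and chunk granularity are illustrative assumptions, not the
# paper's actual predictor outputs.

import heapq
import math
from typing import Callable, List


def greedy_allocate(
    gain_curves: List[Callable[[int], float]],  # gain_curves[i](b) = predicted accuracy of query i at budget b
    total_budget: int,
    chunk: int = 64,  # tokens handed out per greedy step (assumed granularity)
) -> List[int]:
    """Distribute `total_budget` tokens across queries to maximize the sum of
    predicted accuracies, assuming each curve has diminishing returns."""
    n = len(gain_curves)
    alloc = [0] * n

    # Max-heap (negated gains) keyed by the marginal accuracy gain of giving
    # each query one more chunk of tokens.
    heap = [(-(gain_curves[i](chunk) - gain_curves[i](0)), i) for i in range(n)]
    heapq.heapify(heap)

    remaining = total_budget
    while remaining >= chunk and heap:
        neg_gain, i = heapq.heappop(heap)
        if -neg_gain <= 0:
            break  # no query is predicted to benefit from more tokens
        alloc[i] += chunk
        remaining -= chunk
        # Re-insert query i with the marginal gain of its *next* chunk.
        nxt = gain_curves[i](alloc[i] + chunk) - gain_curves[i](alloc[i])
        heapq.heappush(heap, (-nxt, i))
    return alloc


if __name__ == "__main__":
    # Toy saturating curves standing in for the predictor's accuracy estimates:
    # the "harder" the query (larger k), the slower accuracy saturates.
    curves = [lambda b, k=k: 1 - math.exp(-b / k) for k in (100, 300, 900)]
    print(greedy_allocate(curves, total_budget=1024))
```

Under these assumptions, queries whose predicted accuracy saturates quickly stop receiving tokens early, and the remaining budget flows to queries that still stand to gain, which is the compute–accuracy trade-off the abstract targets.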
