

Poster
in
Workshop: Methods and Opportunities at Small Scale (MOSS)

Improving Pathfinding with Anchoring Tokens

Huaqing Zhang · Bingbin Liu · Juno Kim · Andrej Risteski

Keywords: [ next-token prediction ] [ planning ] [ path-finding ]


Abstract:

Planning is a critical aspect of multi-step reasoning, yet it remains challenging for large language models (LLMs). In this work, we use pathfinding in graphs as a sandbox for understanding and improving the planning abilities of LLMs. Our results show that while conventional autoregressive training generalizes poorly, an anchoring strategy, whereby a model first predicts a small subset of intermediate nodes along the path, significantly improves pathfinding performance. We confirm these gains on two families of graphs with markedly different structures and provide preliminary heuristics for selecting effective anchor nodes, offering guidance for more realistic settings.
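To make the anchoring strategy concrete, below is a minimal sketch of how training sequences for a next-token-prediction model could be constructed, assuming paths are written as sequences of node ids and anchors are a few intermediate nodes emitted before the full path. The function name, special tokens (<q>, <anchors>, <path>), and use of networkx are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of anchor-augmented training sequences (assumptions noted above).
import random
import networkx as nx  # assumed graph library for the toy example

def make_training_sequence(graph, source, target, num_anchors=2, seed=0):
    """Build token sequences: plain (query -> path) and anchored (query -> anchors -> path)."""
    rng = random.Random(seed)
    path = nx.shortest_path(graph, source, target)

    # Pick a small subset of intermediate nodes (excluding endpoints) as anchors,
    # kept in the order they appear along the path.
    interior = path[1:-1]
    anchors = sorted(rng.sample(interior, min(num_anchors, len(interior))),
                     key=path.index)

    # Conventional autoregressive target: query followed by the full path.
    plain = ["<q>", str(source), str(target), "<path>"] + [str(v) for v in path]

    # Anchored target: the model first commits to a sparse plan (the anchors),
    # then generates the full path.
    anchored = (["<q>", str(source), str(target), "<anchors>"]
                + [str(v) for v in anchors]
                + ["<path>"] + [str(v) for v in path])
    return plain, anchored

if __name__ == "__main__":
    G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(4, 4))
    plain, anchored = make_training_sequence(G, source=0, target=15)
    print(" ".join(plain))
    print(" ".join(anchored))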
