Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models
Position: Reasoning LLMs are Wandering Solution Explorers
Jiahao Lu · Ziwei Xu · Mohan Kankanhalli
Keywords: [ Test-Time Compute ] [ Systematic Solution Exploration ] [ Reasoning LLMs ] [ Reliable Foundation Models ]
Large Language Models (LLMs) have demonstrated impressive reasoning abilities through test-time computation (TTC) techniques such as chain-of-thought prompting and tree-based reasoning. However, we argue that current reasoning LLMs (RLLMs) lack the ability to explore the solution space systematically. This paper formalizes what constitutes systematic problem solving and identifies common failure modes that reveal reasoning LLMs to be wanderers rather than systematic explorers. Through qualitative and quantitative analysis across multiple state-of-the-art LLMs, we uncover persistent issues, including invalid reasoning steps, redundant exploration, and hallucinated or unfaithful conclusions. Our findings suggest that current models can appear competent on simple tasks yet degrade sharply as problem complexity increases. Based on these findings, we advocate for new metrics and tools that evaluate not just final outputs but the structure of the reasoning process itself.
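As a concrete illustration of evaluating the structure of a reasoning trace rather than only its final answer, the sketch below computes two toy trace-level metrics. This is a minimal sketch, not the paper's method: the function and metric names (trace_structure_metrics, redundancy_rate, coverage) are hypothetical, and it assumes reasoning traces can be canonicalized into discrete step states.

    from collections import Counter

    def trace_structure_metrics(steps, solution_space_size=None):
        """Toy structural metrics over a serialized reasoning trace.

        steps: list of canonicalized reasoning states (e.g., normalized
               partial-solution strings extracted from a chain of thought).
        solution_space_size: optional count of distinct states a fully
               systematic explorer would visit (hypothetical parameter).
        """
        counts = Counter(steps)
        unique = len(counts)
        revisits = len(steps) - unique  # steps that re-derive an already-seen state
        metrics = {
            "redundancy_rate": revisits / len(steps) if steps else 0.0,
            "unique_states": unique,
        }
        if solution_space_size:
            # Fraction of the solution space actually covered by the trace.
            metrics["coverage"] = unique / solution_space_size
        return metrics

    # Example: a wandering trace revisits old states instead of covering new ones.
    trace = ["A", "AB", "A", "AB", "ABC", "A"]
    print(trace_structure_metrics(trace, solution_space_size=7))
    # {'redundancy_rate': 0.5, 'unique_states': 3, 'coverage': 0.428...}

Under this toy view, a systematic explorer drives redundancy_rate toward zero and coverage toward one, whereas a wanderer accumulates revisits; any practical instantiation would need a task-specific way to canonicalize reasoning steps into states.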