

Poster

Benchmarking Abstract and Reasoning Abilities Through A Theoretical Perspective

Qingchuan Ma · Yuhang Wu · Xiawu Zheng · Rongrong Ji

East Exhibition Hall A-B #E-2701
[ Project Page ]
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

In this paper, we aim to establish a simple, effective, and theoretically grounded benchmark for rigorously probing abstract reasoning in Large Language Models (LLMs). To achieve this, we first develop a mathematical framework that defines abstract reasoning as the ability to: (i) extract essential patterns independent of surface representations, and (ii) apply consistent rules to these abstract patterns. Based on this framework, we introduce two novel, complementary metrics: Γ measures basic reasoning accuracy, while ∆ quantifies a model's reliance on specific symbols rather than underlying patterns, a key indicator of true abstraction versus mere memorization. To implement this measurement, we design a benchmark built on systematic symbol remapping in rule-based tasks, which forces models to demonstrate genuine pattern recognition beyond superficial token matching. Extensive LLM evaluations using this benchmark (commercial API models, models from 7B to 70B parameters, and multi-agent settings) reveal: 1) critical limitations in non-decimal arithmetic and symbolic reasoning; 2) persistent abstraction gaps despite chain-of-thought prompting; and 3) the effectiveness of ∆ in robustly measuring memory dependence by quantifying performance degradation under symbol remapping, particularly highlighting operand-specific memorization. These findings underscore that current LLMs, despite domain-specific strengths, still lack robust abstract reasoning, and they point to key areas for future improvement.
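The paper's own benchmark code is not reproduced here, but the core measurement is easy to sketch. The Python snippet below uses hypothetical names (`make_remap`, `gamma_delta`, a stand-in `model` callable) and one simplifying assumption about the metrics: Γ is taken as plain accuracy on the original tasks, and ∆ as the accuracy drop once every task symbol is bijectively remapped to a fresh token. Under that reading, a model that memorized surface strings rather than the underlying rule scores a high Γ but also a high ∆.

```python
import random
import string

def make_remap(symbols: str) -> dict:
    """Random bijection from the task's symbols to fresh surrogate
    tokens (uppercase letters): surface forms change, the rule does not."""
    surrogates = random.sample(string.ascii_uppercase, len(symbols))
    return dict(zip(symbols, surrogates))

def remap(text: str, mapping: dict) -> str:
    """Apply the symbol remapping character by character."""
    return "".join(mapping.get(ch, ch) for ch in text)

def accuracy(model, tasks) -> float:
    """Fraction of (prompt, answer) pairs the model gets right."""
    return sum(model(p) == a for p, a in tasks) / len(tasks)

def gamma_delta(model, tasks, mapping):
    """Gamma: accuracy on the original tasks.
    Delta: performance degradation under symbol remapping, a proxy
    for reliance on memorized symbols (assumed definition)."""
    remapped = [(remap(p, mapping), remap(a, mapping)) for p, a in tasks]
    g = accuracy(model, tasks)
    return g, g - accuracy(model, remapped)

# Toy base-7 arithmetic: 3 + 5 = 11 (base 7), 6 + 6 = 15 (base 7).
tasks = [("3 + 5 =", "11"), ("6 + 6 =", "15")]
mapping = make_remap("0123456")
# `model` would wrap an LLM call; any callable prompt -> answer works here.
```

A memorization-prone model might answer "3 + 5 =" correctly from training data yet fail on its remapped twin, and ∆ captures exactly that gap. The actual metric definitions and task suite are specified in the paper.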

Lay Summary:

Can advanced AI truly think abstractly, like humans? We developed a new benchmark to rigorously test this. Our method challenges AI systems by systematically changing the symbols in rule-based tasks (e.g., using letters in place of numbers in math problems), forcing them to grasp the underlying patterns rather than just memorize specific examples. Our evaluations reveal critical limitations: current models struggle significantly with tasks such as non-decimal arithmetic and reasoning over novel symbols, indicating a heavy reliance on memory over genuine abstraction. This work underscores that robust, flexible abstract reasoning remains a key challenge for AI and highlights crucial areas for future improvement.
