Poster

One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual Reasoning in Mathematical LLMs

Yinghui Li · Jiayi Kuang · Haojing Huang · Zhikun Xu · Xinnian Liang · Yi Yu · Wenlian Lu · Yangning Li · Xiaoyu Tan · Chao Qu · Ying Shen · Hai-Tao Zheng · Philip Yu

East Exhibition Hall A-B #E-1709
[ Project Page ]
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Leveraging mathematical Large Language Models (LLMs) for proof generation is a fundamental topic in LLM research. We argue that the ability of current LLMs to prove statements largely depends on whether they have encountered the relevant proof process during training. This reliance limits their deeper understanding of mathematical theorems and related concepts. Inspired by the pedagogical method of "proof by counterexamples" commonly used in human mathematics education, our work aims to enhance LLMs' ability to conduct mathematical reasoning and proof through counterexamples. Specifically, we manually create a high-quality, university-level mathematical benchmark, COUNTERMATH, which requires LLMs to prove mathematical statements by providing counterexamples, thereby assessing their grasp of mathematical concepts. Additionally, we develop a data engineering framework to automatically obtain training data for further model improvement. Extensive experiments and detailed analyses demonstrate that COUNTERMATH is challenging, indicating that LLMs, such as OpenAI o1, have insufficient counterexample-driven proof capabilities. Moreover, our exploration into model training reveals that strengthening LLMs' counterexample-driven conceptual reasoning abilities is crucial for improving their overall mathematical capabilities. We believe that our work offers new perspectives to the community working on mathematical LLMs.
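To make the task format concrete, the following is an illustrative sketch of the kind of counterexample-driven item the abstract describes: a statement to be refuted by exhibiting a specific object. It is a generic university-level analysis exercise written for this summary, not an item drawn from COUNTERMATH.

\textbf{Statement.} If the series $\sum_{n=1}^{\infty} a_n$ converges, then $\sum_{n=1}^{\infty} |a_n|$ also converges.

\textbf{Counterexample.} Take $a_n = \frac{(-1)^{n+1}}{n}$. The alternating harmonic series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$ converges (to $\ln 2$) by the alternating series test, but $\sum_{n=1}^{\infty} \left| a_n \right| = \sum_{n=1}^{\infty} \frac{1}{n}$ is the harmonic series, which diverges. Hence the statement is false: convergence does not imply absolute convergence.

Answering such an item requires the model to recall the relevant concepts (conditional vs. absolute convergence) and produce a concrete witness, rather than reproduce a memorized proof template.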

Lay Summary:

This paper focuses on enhancing the mathematical reasoning capabilities of Large Language Models (LLMs) by addressing their reliance on "drill-based learning" (memorizing proof patterns from training data), which limits their deep understanding of mathematical concepts. Inspired by how humans use counterexamples to learn theorems (e.g., identifying exceptions to test the validity of statements), the researchers developed COUNTERMATH, a benchmark that evaluates LLMs' ability to prove or disprove mathematical statements using counterexamples. This work highlights the need for LLMs to move beyond memorization and develop deeper conceptual understanding, with COUNTERMATH serving as a critical tool to measure this progress. The findings suggest that integrating counterexample-based reasoning into LLM training could unlock more human-like mathematical thinking, benefiting fields like theorem proving and academic research.
