Poster in Workshop: 2nd AI for Math Workshop @ ICML 2025

IntegralBench: Benchmarking LLMs with Definite Integral Problems

Bintao Tang · Xin Yang · Yuhao Wang · Zixuan Qiu · Zimo Ji · Wenyuan Jiang


Abstract:

We present IntegralBench, a focused benchmark designed to evaluate Large Language Model (LLM) performance on definite integral problems. IntegralBench provides both symbolic and numerical ground-truth solutions with manual difficulty annotations. Our evaluation of nine state-of-the-art LLMs reveals significant performance gaps and a strong correlation between problem difficulty and model accuracy, establishing baseline metrics for this challenging domain. IntegralBench aims to advance automated mathematical reasoning by providing a rigorous evaluation framework tailored to definite integral computation.
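To illustrate how numerical ground truth can be used to score closed-form answers to definite integral problems, the sketch below parses a model's symbolic answer with sympy and compares it to the known numerical value. The `check_answer` helper, tolerance, and example problem are illustrative assumptions, not IntegralBench's actual evaluation code.

```python
# A minimal sketch, assuming a sympy-based checker: parse a model's
# closed-form answer and compare it to the numerical ground-truth value
# of the definite integral. Names and tolerances are assumptions, not
# IntegralBench's actual evaluation code.
import math

import sympy as sp

def check_answer(model_expr: str, numeric_truth: float, tol: float = 1e-6) -> bool:
    """Return True if the model's symbolic answer matches the numerical
    ground truth within tolerance; unparseable answers count as wrong."""
    try:
        value = float(sp.sympify(model_expr).evalf())
    except (sp.SympifyError, TypeError, ValueError):
        return False
    return math.isclose(value, numeric_truth, rel_tol=tol, abs_tol=tol)

# Example: the integral of x**2 over [0, 1] has ground truth 1/3.
print(check_answer("1/3", 1 / 3))    # True
print(check_answer("pi/9", 1 / 3))   # False
```

Combining a symbolic parse with a numerical comparison of this kind sidesteps the hard problem of deciding symbolic equivalence directly, at the cost of accepting any expression that happens to evaluate to the right number within tolerance.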