

Poster
in
Workshop: 2nd AI for Math Workshop @ ICML 2025

A Compute-Matched Re-Evaluation of TroVE on MATH

Tobias Sesterhenn · Ian Berlot-Attwell · Janis Zenkner · Christian Bartelt


Abstract:

Reusing established theorems and formulas is central to mathematical problem solving, serving as essential building blocks for tackling increasingly complex challenges. Recent work, TroVE, argues that code-generating Large Language Models (LLMs) can benefit similarly on the MATH benchmark by inducing and reusing higher-level toolboxes. By allocating computational budget across an ensemble of three modes -- directly generating code, creating tools, and reusing tools -- TroVE claims to outperform a Primitive baseline that only performs direct generation. However, recent analysis (Berlot-Attwell et al., 2024) casts doubt on these gains, noting that the tools created are often trivial or rarely reused, suggesting that improvements may stem from self-consistency or self-correction. In this work, we re-evaluate TroVE on MATH, analyze the impact of each of its modes, and show that its benefit does not come from these mechanisms, but simply from the higher computational budget spent on TroVE compared to Primitive. To this end, we also perform a small correction in the original implementation of TroVE's selection mechanism, boosting TroVE's accuracy on MATH by 3%. After matching for compute, the benefit of TroVE shrinks to a marginal improvement of 1%, suggesting that this toolbox approach does not provide a significant benefit on MATH.
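The core of the compute-matching argument can be illustrated with a minimal sketch. Assuming the abstract's description, TroVE splits a sampling budget across its three modes and then selects among the candidates (a self-consistency-style majority vote), while Primitive spends the whole budget on direct generation. All function and parameter names below are hypothetical, not the authors' implementation:

```python
from collections import Counter

def select_by_majority(answers):
    """Self-consistency-style selection: pick the most frequent answer."""
    counts = Counter(a for a in answers if a is not None)
    return counts.most_common(1)[0][0] if counts else None

def primitive(solve_direct, problem, budget):
    """Baseline: spend the entire sample budget on direct code generation."""
    return select_by_majority([solve_direct(problem) for _ in range(budget)])

def trove_ensemble(modes, problem, budget):
    """Ensemble: split the SAME budget evenly across the three modes
    (direct generation, tool creation, tool reuse). A compute-matched
    comparison requires `budget` to be equal to the baseline's budget."""
    per_mode = budget // len(modes)
    answers = [mode(problem) for mode in modes for _ in range(per_mode)]
    return select_by_majority(answers)
```

Under this framing, any accuracy gap between `trove_ensemble` and `primitive` at equal `budget` isolates the contribution of the toolbox modes from the contribution of simply drawing more samples.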
