Poster in Workshop: 2nd AI for Math Workshop @ ICML 2025

Widening the Mathematical Search Space with Abstraction‑Encouraging Prompts

Shervin Ardeshir


Abstract:

A core step in automated discovery and agentic ML research is generating diverse mathematical functions (hypotheses) for solving varied problems. Large language models (LLMs) are natural tools for this task, but they often regurgitate familiar patterns, especially when prompted with explicit references to known roles (e.g., 'activation function') or frameworks (e.g., PyTorch). Such inductive biases can collapse the functional search space and hinder exploration. Here we investigate how prompt phrasing induces domain-specific and platform-specific inductive biases in function generation. We compare four prompting styles, ranging from explicit to fully abstract, across three LLMs, generating 12,000 scalar-to-scalar functions. Our analysis quantifies shifts in mathematical characteristics and operator diversity, revealing how seemingly minor prompt differences can significantly alter the space of functions explored.
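The sketch below is not the authors' code; it illustrates the idea under stated assumptions. The four prompt templates, the sample expressions, and the function names are hypothetical placeholders, and the operator-diversity count uses Python's `ast` module as a rough proxy for the paper's analysis of generated scalar-to-scalar functions.

```python
# Minimal sketch (assumptions, not the paper's implementation): prompt templates
# ordered from explicit (names a role and a framework) to fully abstract, plus a
# simple operator-diversity profile over generated expression strings.
import ast
from collections import Counter

# Hypothetical prompting styles, from most explicit to fully abstract.
PROMPTS = {
    "explicit":       "Write a PyTorch activation function mapping a scalar to a scalar.",
    "domain_only":    "Write an activation function mapping a scalar to a scalar.",
    "platform_free":  "Write a mathematical function f(x) mapping a real number to a real number.",
    "fully_abstract": "Propose a scalar-to-scalar transformation as a symbolic expression in x.",
}

def operator_profile(expressions):
    """Count binary operators and named function calls across expression
    strings, as a rough proxy for operator diversity."""
    counts = Counter()
    for expr in expressions:
        tree = ast.parse(expr, mode="eval")
        for node in ast.walk(tree):
            if isinstance(node, ast.BinOp):
                counts[type(node.op).__name__] += 1      # Add, Mult, Pow, ...
            elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                counts[node.func.id] += 1                 # exp, sin, tanh, ...
    return counts

if __name__ == "__main__":
    # Stand-in outputs; in practice these would come from querying each LLM
    # with each prompt style and collecting the generated functions.
    samples = {
        "explicit":       ["max(0, x)", "x * tanh(log(1 + exp(x)))"],
        "fully_abstract": ["sin(x) / (1 + x**2)", "exp(-abs(x)) * cos(3 * x)"],
    }
    for style, exprs in samples.items():
        print(style, dict(operator_profile(exprs)))
```

Comparing the resulting operator profiles across prompt styles is one simple way to quantify how much an explicit framing narrows or widens the set of operators the model draws on.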
