Poster in Workshop: Programmatic Representations for Agent Learning

Scalable Gameplay AI through Composition of LLM-Generated Heuristics

Danrui Li · Sen Zhang · Mubbasir Kapadia


Abstract:

Prototyping is a critical stage in game development, often aided by gameplay AI that simulates player behavior to support early design evaluation. Recent work has explored the use of Large Language Models (LLMs) as flexible and interpretable gameplay agents, but their high per-decision inference costs hinder scalability. We propose a program-as-policy framework that prompts an LLM to generate a diverse set of heuristic functions. These functions undergo an LLM-free selection and aggregation process to form a composite policy, eliminating the need for costly runtime inference. Applied to strategy-heavy games, our method outperforms recent LLM-based agents in both effectiveness and efficiency, enabling scalable and interpretable game prototyping.
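A minimal sketch of the composite-policy idea described above, assuming (hypothetically) that each LLM-generated heuristic is a plain Python function scoring candidate actions, that selection ranks heuristics by an offline evaluation score, and that aggregation is a weighted sum of heuristic scores. The names (`Heuristic`, `select_heuristics`, `composite_policy`) and the weighted-sum rule are illustrative, not the authors' implementation.

```python
from typing import Callable, Dict, List, Sequence

State = Dict    # placeholder game-state representation
Action = str    # placeholder action representation
Heuristic = Callable[[State, Action], float]  # scores an action in a state


def select_heuristics(candidates: List[Heuristic],
                      score_fn: Callable[[Heuristic], float],
                      top_k: int = 5) -> List[Heuristic]:
    """Keep the top-k heuristics by an offline evaluation score (no LLM calls)."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_k]


def composite_policy(heuristics: Sequence[Heuristic],
                     weights: Sequence[float]) -> Callable[[State, Sequence[Action]], Action]:
    """Aggregate selected heuristics into a single policy via a weighted score sum."""
    def act(state: State, legal_actions: Sequence[Action]) -> Action:
        def total(a: Action) -> float:
            return sum(w * h(state, a) for h, w in zip(heuristics, weights))
        return max(legal_actions, key=total)
    return act


if __name__ == "__main__":
    # Two toy heuristics standing in for LLM-generated ones.
    def prefer_gather(state, action):
        return 1.0 if action == "gather" else 0.0

    def prefer_expand(state, action):
        return 1.0 if action == "expand" and state.get("resources", 0) > 10 else 0.0

    # score_fn would normally come from rollout-based evaluation; a constant is used here.
    selected = select_heuristics([prefer_gather, prefer_expand], score_fn=lambda h: 1.0)
    policy = composite_policy(selected, weights=[0.6, 0.4])
    print(policy({"resources": 20}, ["gather", "expand", "wait"]))  # -> "gather"
```

At runtime the policy only evaluates the selected Python functions, which is what removes per-decision LLM inference from the loop; how the paper actually scores, selects, and weights heuristics is not specified in this abstract.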
