Poster
Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
Anjiang Wei · Allen Nie · Thiago Teixeira · Rohan Yadav · Wonchan Lee · Ke Wang · Alex Aiken
East Exhibition Hall A-B #E-2410
Can large language models (LLMs) make programs run faster on supercomputers? In this work, we show that they can—by designing a high-level interface that connects an LLM-powered agent with low-level system software. This interface allows LLMs to generate and iteratively refine the high-level programs that control key performance aspects of program execution, without modifying the complex underlying system code. The challenge lies in quickly discovering such effective high-level programs. To address this, we introduce a natural language–based guidance mechanism that interprets execution feedback and helps the LLM improve more efficiently. Our results show that this approach is significantly faster and more effective than traditional reinforcement learning methods. Overall, our work suggests that LLMs could play a major role in solving performance optimization challenges in computer systems.
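The generate–measure–refine loop described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: `propose_program` stands in for the LLM agent (here a stub that perturbs a numeric "mapping decision" so the loop runs without any model), `measure_runtime` stands in for executing the candidate on the target system, and `summarize_feedback` plays the role of the natural language–based guidance mechanism that turns raw metrics into feedback for the next proposal.

```python
import random

def propose_program(best, feedback, rng):
    # A real agent would condition on the natural-language feedback;
    # this stub just nudges the current best candidate by +/-1.
    return best + rng.choice([-1, 1])

def measure_runtime(program):
    # Stand-in for running the program on the system: in this toy
    # setup, runtime is minimized when program == 7.
    return (program - 7) ** 2 + 1.0

def summarize_feedback(program, runtime):
    # The guidance mechanism interprets execution results as text.
    return f"candidate {program} ran in {runtime:.1f}s"

def optimize(iterations=50, seed=0):
    rng = random.Random(seed)
    best = 0
    best_time = measure_runtime(best)
    feedback = summarize_feedback(best, best_time)
    for _ in range(iterations):
        cand = propose_program(best, feedback, rng)
        t = measure_runtime(cand)
        feedback = summarize_feedback(cand, t)
        if t < best_time:          # keep only improvements
            best, best_time = cand, t
    return best, best_time
```

The key design point mirrored here is the separation of concerns: the agent only ever rewrites the small high-level program (the candidate), while the complex system code behind `measure_runtime` is treated as a black box that returns feedback.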