Poster in Workshop: Multi-Agent Systems in the Era of Foundation Models: Opportunities, Challenges and Futures
Data-efficient Multi-agent Spatial Planning with LLMs
Huangyuan Su · Aaron Walsman · Daniel Garces · Sham Kakade · Stephanie Gil
In this project, our goal is to determine how to leverage the world knowledge of pretrained large language models for efficient and robust learning in multi-agent decision making. We examine this in a taxi routing and assignment problem, where agents must decide how best to pick up passengers in order to minimize overall waiting time. Although this problem is situated on a graph-based road network, we show that with proper prompting, zero-shot performance on this task is quite strong. Furthermore, with limited fine-tuning combined with the one-at-a-time rollout algorithm for policy improvement, LLMs can outperform existing approaches while using 50 times fewer environment interactions. We also explore various linguistic prompting approaches and show that including certain information that is easily computable from the environment significantly improves performance. Finally, we highlight the LLM’s built-in semantic understanding, showing its ability to adapt to environmental factors through simple prompts.
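For intuition, the sketch below illustrates the generic one-at-a-time (agent-by-agent) rollout scheme referenced in the abstract, with a prompted LLM playing the role of the base policy. The helper names (`candidate_actions`, `simulate_cost`, `base_policy`) and the overall interface are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of one-agent-at-a-time rollout, assuming user-supplied helpers.
# `base_policy` could wrap a prompted LLM that returns an action for one taxi.
from typing import Callable, Sequence, Any, List

def one_at_a_time_rollout(
    state: Any,
    agents: Sequence[int],
    candidate_actions: Callable[[Any, int], List[Any]],   # feasible actions per agent
    base_policy: Callable[[Any, int], Any],               # e.g. a prompted LLM
    simulate_cost: Callable[[Any, List[Any]], float],     # expected waiting-time cost
) -> List[Any]:
    """Greedy agent-by-agent policy improvement over a base policy.

    Each agent, in turn, optimizes its own action while earlier agents keep
    their already-chosen actions and later agents follow the base policy.
    """
    chosen: List[Any] = []
    for i, agent in enumerate(agents):
        best_action, best_cost = None, float("inf")
        for action in candidate_actions(state, agent):
            # Remaining agents fall back to the base policy (the LLM here).
            tail = [base_policy(state, a) for a in agents[i + 1:]]
            cost = simulate_cost(state, chosen + [action] + tail)
            if cost < best_cost:
                best_action, best_cost = action, cost
        chosen.append(best_action)
    return chosen
```

In this scheme, the quality of the base policy matters only through the cost of the simulated tail, which is why a strong zero-shot LLM base policy can reduce the number of environment interactions needed for improvement.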