Spotlight Poster
Multi-agent Architecture Search via Agentic Supernet
Guibin Zhang · Luyang Niu · Junfeng Fang · Kun Wang · Lei Bai · Xiang Wang
East Exhibition Hall A-B #E-2812
Oral presentation: Oral 1A Alignment and Agents
Tue 15 Jul 10 a.m. PDT — 11 a.m. PDT
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT
Abstract:
Large Language Model (LLM)-empowered multi-agent systems extend the cognitive boundaries of individual agents through disciplined collaboration and interaction, yet constructing these systems often requires labor-intensive manual design. Despite the availability of methods to automate the design of agentic workflows, they typically seek to identify a static, complex, one-size-fits-all system, which fails to dynamically allocate inference resources based on the difficulty and domain of each query. To address this challenge, we shift away from the pursuit of a monolithic agentic system, instead optimizing the \textbf{agentic supernet}, a probabilistic and continuous distribution of agentic architectures. We introduce \textbf{MaAS}, an automated framework that samples query-dependent agentic systems from the supernet, delivering high-quality solutions and tailored resource allocation (\textit{e.g.}, LLM calls, tool calls, token cost). Comprehensive evaluation across six benchmarks demonstrates that MaAS \textbf{(I)} requires only $6\sim45\%$ of the inference costs of existing handcrafted or automated multi-agent systems, \textbf{(II)} surpasses them by $0.54\%\sim11.82\%$, and \textbf{(III)} enjoys superior cross-dataset and cross-LLM-backbone transferability.
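To make the core idea concrete, the sketch below illustrates query-dependent sampling from a layered architecture distribution with an early-exit option, so that easy queries yield shallower (cheaper) workflows. This is a minimal toy sketch, not the paper's implementation: the operator names (`cot`, `debate`, etc.), the layered structure, and the `exit` token are all illustrative assumptions.

```python
import random

# Hypothetical agentic-operator pool (illustrative only; the paper's
# actual operator set is not reproduced here).
OPERATORS = ["cot", "debate", "self_refine", "tool_use", "ensemble"]

def sample_architecture(layer_probs, rng=None):
    """Sample one agentic workflow from a layered distribution.

    layer_probs: list of dicts, one per layer, mapping an operator name
    (or the special "exit" token) to its probability. The "exit" token
    models early termination, so simple queries can spend fewer LLM calls.
    """
    rng = rng or random.Random()
    arch = []
    for probs in layer_probs:
        names = list(probs)
        choice = rng.choices(names, weights=[probs[n] for n in names], k=1)[0]
        if choice == "exit":  # stop stacking operators for this query
            break
        arch.append(choice)
    return arch

# A query-conditioned controller would emit different layer_probs per query;
# here we hand-write one distribution that favors early exit (an easy query).
easy_query_probs = [
    {"cot": 0.5, "tool_use": 0.1, "exit": 0.4},
    {"self_refine": 0.2, "debate": 0.1, "exit": 0.7},
    {"ensemble": 0.1, "exit": 0.9},
]
workflow = sample_architecture(easy_query_probs, rng=random.Random(0))
```

In this framing, training the supernet amounts to adjusting `layer_probs` (conditioned on the query) so that sampled workflows trade off solution quality against LLM/tool-call cost.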
Lay Summary:
MaAS extends traditional neural architecture search into the agentic AI domain, introducing the first agentic supernet that dynamically adjusts its complexity based on task demands.