Poster
On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents
Jen-Tse Huang · Jiaxu Zhou · Tailin Jin · Xuhui Zhou · Zixi Chen · Wenxuan Wang · Youliang Yuan · Michael Lyu · Maarten Sap
East Exhibition Hall A-B #E-1101
Teams of AI “agents” built on large language models can solve coding, maths and translation tasks, but one careless—or malicious—agent can poison the discussion and drag down the whole team. We create two automated stress-tests: AutoTransform, which rewrites an agent’s role so it secretly adds mistakes, and AutoInject, which slips errors directly into its messages. Using them, we explore how different multi-agent structures (linear chains, flat peer groups and human-like hierarchies) and different tasks suffer from these faulty agents. A hierarchical structure—one “boss” overseeing peer agents—proved most robust, losing only ≈ 5% accuracy, while a simple chain collapsed by ≈ 24%. Adding two simple safeguards—a “Challenger” ability that lets agents question each other and an independent “Inspector” reviewer—recovered up to 96% of the lost performance. Our open-source toolkit lets researchers and companies quickly gauge and harden the resilience of their AI agents before deploying in the wild.