

Poster

On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents

Jen-Tse Huang · Jiaxu Zhou · Tailin Jin · Xuhui Zhou · Zixi Chen · Wenxuan Wang · Youliang Yuan · Michael Lyu · Maarten Sap

East Exhibition Hall A-B #E-1101
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: Large language model-based multi-agent systems have shown great abilities across various tasks due to the collaboration of expert agents, each focusing on a specific domain. However, the impact of clumsy or even malicious agents—those that frequently make errors in their tasks—on the overall performance of the system remains underexplored. This paper investigates: (1) What is the resilience of various system structures (e.g., A$\rightarrow$B$\rightarrow$C, A$\leftrightarrow$B$\leftrightarrow$C) under faulty agents, across different downstream tasks? (2) How can we increase system resilience to defend against these agents? To simulate faulty agents, we propose two approaches—AutoTransform and AutoInject—which introduce mistakes into the agents' responses. Experiments on four downstream tasks using six systems show that the "hierarchical" structure, i.e., A$\rightarrow$(B$\leftrightarrow$C), exhibits superior resilience with the lowest performance drop of 5.5%, compared to 10.5% and 23.7% for the other two structures. To further improve resilience, we introduce (1) Challenger, which gives each agent a mechanism to challenge others' outputs, and (2) Inspector, an additional agent that reviews and corrects messages, recovering up to 96.4% of the errors made by faulty agents. Our code and data are available at https://github.com/CUHK-ARISE/MAS-Resilience.
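
The fault-injection setup can be pictured with a small sketch. Below is a minimal, illustrative Python sketch (not the authors' released code; see the repository above for that), assuming hypothetical names such as Agent, FaultyAgentWrapper, and run_chain: an AutoInject-style wrapper corrupts a fraction of one agent's outgoing messages inside a linear A→B→C pipeline.

# Illustrative sketch only: an AutoInject-style wrapper that corrupts an
# agent's outgoing messages with some probability, placed inside a simple
# A -> B -> C chain. All names here (Agent, FaultyAgentWrapper, run_chain)
# are hypothetical, not the paper's API.
import random
from typing import Callable, List


class Agent:
    """Stand-in for an LLM-backed agent: maps an incoming message to a reply."""

    def __init__(self, name: str, respond: Callable[[str], str]):
        self.name = name
        self.respond = respond

    def step(self, message: str) -> str:
        return self.respond(message)


class FaultyAgentWrapper(Agent):
    """Wraps an agent and injects errors into its replies (AutoInject-style)."""

    def __init__(self, inner: Agent, corrupt: Callable[[str], str], rate: float = 0.5):
        super().__init__(inner.name + "(faulty)", inner.respond)
        self.inner = inner
        self.corrupt = corrupt   # e.g., flip a sign, rename a variable, drop a line
        self.rate = rate         # fraction of messages that get corrupted

    def step(self, message: str) -> str:
        reply = self.inner.step(message)
        if random.random() < self.rate:
            reply = self.corrupt(reply)
        return reply


def run_chain(agents: List[Agent], task: str) -> str:
    """Linear A -> B -> C structure: each agent's output feeds the next agent."""
    message = task
    for agent in agents:
        message = agent.step(message)
    return message


# Example: agent B is wrapped as the faulty agent in an A -> B -> C chain.
chain = [
    Agent("A", lambda m: m + " | A's plan"),
    FaultyAgentWrapper(Agent("B", lambda m: m + " | B's code"),
                       corrupt=lambda m: m.replace("code", "c0de"), rate=1.0),
    Agent("C", lambda m: m + " | C's review"),
]
print(run_chain(chain, "Write a sorting function."))

An AutoTransform-style variant would instead rewrite the wrapped agent's role prompt so that the errors originate from the agent itself rather than from post-hoc edits to its messages.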

Lay Summary:

Teams of AI “agents” built on large language models can solve coding, maths and translation tasks, but one careless—or malicious—agent can poison the discussion and drag down the whole team. We create two automated stress-tests: AutoTransform, which rewrites an agent’s role so it secretly adds mistakes, and AutoInject, which slips errors directly into its messages. Using them, we explore how different multi-agent structures (linear chains, flat peer groups and human-like hierarchies) and different tasks suffer from these faulty agents. A hierarchical structure—one “boss” overseeing peer agents—proved most robust, losing only ≈ 5% accuracy, while a simple chain collapsed by ≈ 24%. Adding two simple safeguards—a “Challenger” ability that lets agents question each other and an independent “Inspector” reviewer—recovered up to 96% of the lost performance. Our open-source toolkit lets researchers and companies quickly gauge and harden the resilience of their AI agents before deploying them in the wild.
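
As a companion to the sketch above, the snippet below illustrates, again with hypothetical names (Inspector, run_with_inspector, EchoAgent) rather than the paper's actual implementation, how an Inspector-style safeguard could sit in the message loop: every reply is reviewed before being forwarded, and a rejected reply is sent back to its author for revision.

# Illustrative sketch only: an Inspector-style safeguard that reviews every
# message before it is passed on to the next agent, and asks the sender to
# retry when the review fails. All names here are hypothetical.
from typing import Callable, List


class EchoAgent:
    """Toy stand-in for an LLM agent: any object with step(str) -> str works."""

    def __init__(self, name: str):
        self.name = name

    def step(self, message: str) -> str:
        return f"{self.name}: {message}"


class Inspector:
    """Reviews a message and decides whether it may be passed on."""

    def __init__(self, looks_valid: Callable[[str], bool]):
        self.looks_valid = looks_valid  # in practice, another LLM judging the message

    def review(self, message: str) -> bool:
        return self.looks_valid(message)


def run_with_inspector(agents: List[EchoAgent], inspector: Inspector,
                       task: str, max_retries: int = 2) -> str:
    """Chain execution in which each reply must pass the Inspector before moving on."""
    message = task
    for agent in agents:
        reply = agent.step(message)
        retries = 0
        while not inspector.review(reply) and retries < max_retries:
            # Feed the rejection back to the same agent and ask for a revision.
            reply = agent.step(message + " [revise: previous reply rejected]")
            retries += 1
        message = reply
    return message


# Example: reject any reply that contains the token "BUG".
inspector = Inspector(looks_valid=lambda m: "BUG" not in m)
team = [EchoAgent("planner"), EchoAgent("coder"), EchoAgent("tester")]
print(run_with_inspector(team, inspector, "Implement binary search."))

The Challenger safeguard described above works at the agent level instead: each agent is prompted to question its peers' outputs rather than relying on a separate reviewer.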
