Poster in Workshop: Multi-Agent Systems in the Era of Foundation Models: Opportunities, Challenges and Futures
Measuring Competition and Cooperation in LLM Bargaining: An Empirical Meta-Game Analysis
Gabriel Smithline · Chris Mascioli · Mithun Chakraborty · Michael Wellman
Abstract:
We conduct an empirical game-theoretic analysis of how large language models negotiate the division of a set of subjectively valued items. The LLM agents represent $\textit{meta-strategies}$, mapping a prompt describing the bargaining scenario to the negotiation strategy they implement. To evaluate their relative performance, we formulate and estimate an empirical $\textit{meta-game}$ model over the LLM agents. We identify equilibria in this game model and analyze the agents' competitiveness and fairness in the equilibrium context according to measures of regret, welfare, and envy-freeness. Uncertainty in these estimates is quantified using a statistical bootstrapping approach. Across nine LLM variants plus RL and heuristic baselines, OpenAI models (o3-mini, GPT-4o) deliver the best welfare–fairness trade-off and the lowest exploitability. A rigid Tough strategy exposes systematic weaknesses in Gemini and Claude models, underscoring the value of adversarial evaluation. Analysis across graded prompt variants shows that incremental strategic guidance reduces blatant mistakes.
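Below is a minimal, illustrative sketch of the evaluation pipeline the abstract describes: estimating a symmetric empirical meta-game from pairwise bargaining payoffs, approximating an equilibrium, and bootstrapping uncertainty in a regret (exploitability) estimate. This is not the authors' code: the strategy names and payoff samples are hypothetical placeholders, and replicator dynamics stands in for whichever equilibrium solver the paper actually uses.

```python
# Illustrative sketch only (assumptions throughout): a symmetric empirical
# meta-game is estimated from sampled bargaining payoffs, an approximate
# equilibrium is found via replicator dynamics, and regret is bootstrapped.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subset of meta-strategies; the paper evaluates nine LLM
# variants plus RL and heuristic baselines.
strategies = ["o3-mini", "gpt-4o", "tough-heuristic"]
n = len(strategies)

# payoff_samples[i][j]: row player i's payoffs over repeated matches vs. j.
# Fabricated here; in practice these would come from simulated negotiations.
payoff_samples = [[rng.normal(loc=5.0 + i - 0.5 * j, scale=1.0, size=30)
                   for j in range(n)] for i in range(n)]

def mean_payoff_matrix(samples):
    """Empirical meta-game: mean payoff of each strategy pairing."""
    return np.array([[s.mean() for s in row] for row in samples])

def replicator_equilibrium(A, iters=5000):
    """Approximate a symmetric equilibrium with discrete replicator dynamics."""
    A = A - A.min() + 1.0            # shift payoffs positive; equilibria unchanged
    x = np.full(len(A), 1.0 / len(A))
    for _ in range(iters):
        x = x * (A @ x) / (x @ A @ x)  # reweight toward better-performing strategies
    return x

def regret(A, x):
    """Max gain from a unilateral pure deviation (exploitability proxy)."""
    return float((A @ x).max() - x @ A @ x)

A = mean_payoff_matrix(payoff_samples)
x_star = replicator_equilibrium(A)
print("equilibrium mix:", dict(zip(strategies, x_star.round(3))))
print("regret:", regret(A, x_star))

# Bootstrap: resample match outcomes to quantify uncertainty in the regret.
boot = []
for _ in range(200):
    resampled = [[rng.choice(s, size=s.size, replace=True)
                  for s in row] for row in payoff_samples]
    Ab = mean_payoff_matrix(resampled)
    boot.append(regret(Ab, replicator_equilibrium(Ab)))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for regret: [{lo:.3f}, {hi:.3f}]")
```

The same bootstrap loop extends directly to the welfare and envy-freeness measures mentioned in the abstract: recompute each statistic on every resampled payoff matrix and report percentile intervals.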