Poster
Enhancing Cooperative Multi-Agent Reinforcement Learning with State Modelling and Adversarial Exploration
Andreas Kontogiannis · Konstantinos Papathanasiou · Yi Shen · Giorgos Stamou · Michael Zavlanos · George Vouros
West Exhibition Hall B2-B3 #W-710
Cooperating in complex environments is hard for AI agents, especially when each agent can only see part of the environment and cannot communicate with the others. This paper tackles that challenge by helping agents infer what is happening around them using only what they individually observe. The key idea is to give each agent a principled way to estimate the unobserved state of the environment and to use that estimate to make better decisions, both for exploring and for coordinating with teammates. We introduce a new approach, SMPE², that gives agents two advantages. First, it helps them build richer internal representations (or "beliefs") about the world. Second, it trains them to explore in a way that uncovers parts of the environment useful to both themselves and their teammates. As a result, agents become not only better individually but also better at teamwork. Experiments on standard multi-agent cooperation benchmarks show that SMPE² outperforms state-of-the-art methods, especially in challenging, fully cooperative tasks.
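To make the two ingredients concrete, below is a minimal, hypothetical PyTorch sketch of how a belief encoder and an exploration bonus of this flavor could fit together. It is not the authors' implementation: the names (`BeliefEncoder`, `exploration_bonus`), the shapes, and the mean-squared-error bonus are all illustrative assumptions, and it presumes the global state is available during centralized training, as is common in centralized-training, decentralized-execution setups.

```python
# Illustrative sketch only -- not the SMPE^2 codebase. Each agent encodes its
# own observation history into a "belief" about the hidden global state; an
# intrinsic bonus rewards visiting regions where that belief model is still
# inaccurate, a rough stand-in for the paper's exploration idea.
import torch
import torch.nn as nn


class BeliefEncoder(nn.Module):
    """Encodes an agent's partial-observation history into a belief vector."""

    def __init__(self, obs_dim: int, belief_dim: int, state_dim: int):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, belief_dim, batch_first=True)
        # Hypothetical self-supervised head: predict the unseen global state
        # from the belief; its error doubles as an exploration signal.
        self.state_head = nn.Linear(belief_dim, state_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(obs_seq)   # obs_seq: (batch, time, obs_dim)
        return h.squeeze(0)        # belief:  (batch, belief_dim)


def exploration_bonus(encoder: BeliefEncoder, belief: torch.Tensor,
                      true_state: torch.Tensor) -> torch.Tensor:
    """Intrinsic reward: large where the belief still mis-predicts the state,
    nudging agents toward informative, under-explored regions. Assumes the
    true state is available during centralized training."""
    pred = encoder.state_head(belief)
    return (pred - true_state).pow(2).mean(dim=-1)


# Toy usage: 4 agents, 10-step observation histories.
enc = BeliefEncoder(obs_dim=8, belief_dim=16, state_dim=12)
obs_histories = torch.randn(4, 10, 8)
beliefs = enc(obs_histories)                         # per-agent beliefs
bonus = exploration_bonus(enc, beliefs, torch.randn(4, 12))
# Training would mix the bonus into each agent's reward, e.g.
# r_total = r_env + beta * bonus, for some weighting coefficient beta.
```

Because every agent is rewarded for reaching states its belief model has not yet mastered, the bonus pushes teammates to jointly cover informative parts of the environment rather than exploring redundantly; the exact form of the bonus in SMPE² differs and is detailed in the paper.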