Poster
in
Workshop: Multi-Agent Systems in the Era of Foundation Models: Opportunities, Challenges and Futures
OWLViz: An Open-World Benchmark for Visual Question Answering
Thuy Nguyen · Dang Nguyen · Nguyen Hoang · Thuan Luong · Franck Dernoncourt · Long Dang · Viet Lai
We present OWLViz, a challenging benchmark for Open-WorLd VISual question answering. OWLViz poses short queries that require integrating multiple capabilities, including common-sense knowledge, visual understanding, web exploration, and specialized tool usage. While humans achieve 69.2% accuracy on these intuitive tasks, even state-of-the-art VLMs struggle: the best model, Gemini, achieves only 27.09% accuracy. Current tool-calling agents and GUI agents, which rely on limited vision and vision-language models as tools, perform even worse. This performance gap reveals significant limitations in multimodal systems' ability to select appropriate tools and execute complex reasoning sequences, establishing new directions for advancing practical AI research.