

Poster in Workshop: Assessing World Models: Methods and Metrics for Evaluating Understanding

Do Vision Language Models infer human intention without visual perspective-taking? Towards a scalable "One-Image-Probe-All" dataset

Bingyang Wang · Yijiang Li · Qingyang Zhou · Hui Yi Leong · Tianwei Zhao · Letian Ye · Hokin Deng · Dezhi Luo · Nuno Vasconcelos

Keywords: [ Knowledge Grounding ] [ Scalable Benchmark ] [ Theory-of-Mind (ToM) ] [ World Model ] [ Multi-Modal Large Language Model ]


Abstract:

At the core of understanding the knowledge grounding of Multimodal Large Language Models (MLLMs) are two key challenges: (1) ensuring fair comparability across concepts and (2) scaling multimodal datasets to reflect real-world complexity. This paper presents a solution through the Omni-Perspective benchmark, which scales the construction of 5-level question-context-answer (QCA) sets from a single real-world image. The benchmark covers 3 concepts along the human Theory-of-Mind (ToM) ability hierarchy and is further divided into 10 fine-grained subdifficulties. Through inference tasks, complexity analysis, and ablation analysis, we evaluate over 2,200 consolidated QCAs on 61 MLLMs. Our findings reveal a key observation: MLLMs follow the human ToM grounding pathway hypothesis, with the exception of level-2 perspective-taking. Furthermore, the dataset enables nuanced analysis of how these observations change across difficulty levels, modalities, distractor logic, and prompt types.
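To make the benchmark structure described above concrete, here is a minimal sketch of how one such question-context-answer item and a simple accuracy metric might be represented. All field names, concept labels, and the `QCAItem`/`accuracy` helpers are illustrative assumptions based on the abstract, not the authors' actual schema or evaluation code.

```python
# Illustrative sketch only: field names, concept labels, and helpers are
# assumptions inferred from the abstract, not the released dataset schema.
from dataclasses import dataclass
from typing import List

# Assumed ToM concept labels along the human ability hierarchy (hypothetical names).
TOM_CONCEPTS = [
    "intention_inference",
    "level1_perspective_taking",
    "level2_perspective_taking",
]

@dataclass
class QCAItem:
    """One question-context-answers item derived from a single real-world image."""
    image_id: str        # shared source image ("One-Image-Probe-All")
    level: int           # difficulty level, 1-5
    concept: str         # one of TOM_CONCEPTS
    subdifficulty: str   # one of the 10 fine-grained subdifficulties
    question: str
    context: str
    choices: List[str]   # answer options, including distractors
    answer_index: int    # index of the correct choice

def accuracy(items: List[QCAItem], predictions: List[int]) -> float:
    """Per-item accuracy of a model's predicted choice indices."""
    if not items:
        return 0.0
    correct = sum(p == item.answer_index for item, p in zip(items, predictions))
    return correct / len(items)
```

Grouping items by `level`, `concept`, or `subdifficulty` before calling `accuracy` would reproduce the kind of per-difficulty and per-concept breakdowns the abstract describes.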
