Poster
in
Workshop: Assessing World Models: Methods and Metrics for Evaluating Understanding

MET-Bench: Multimodal Entity Tracking for Evaluating the Limitations of Vision-Language and Reasoning Models

Vanya Cohen · Ray Mooney

Keywords: [ entity tracking ] [ multimodal ]


Abstract:

We introduce MET-Bench, a multimodal entity tracking benchmark designed to evaluate the ability of vision-language models to track entity states across modalities. Using two structured domains, Chess and the Shell Game, we assess how frontier models integrate textual and image-based state updates. Our findings reveal a significant performance gap between text-based and image-based tracking. We show that this gap stems from deficits in visual reasoning rather than perception, and that explicit text-based reasoning strategies improve performance, though limitations remain, especially in long-horizon multimodal scenarios. MET-Bench highlights the need for improved multimodal representations and reasoning techniques to bridge the gap between textual and visual entity tracking.
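To make the entity-tracking task concrete, here is a minimal sketch of the kind of ground-truth state tracking the Shell Game domain requires; this is a hypothetical illustration, not the authors' actual MET-Bench code. A ball sits under one of several cups, a sequence of swap updates is applied (delivered as text or images in the benchmark), and a model must report the ball's final location.

```python
# Hypothetical illustration of Shell Game entity tracking
# (not the authors' MET-Bench implementation).

def track_ball(start: int, swaps: list[tuple[int, int]]) -> int:
    """Return the cup index holding the ball after applying swaps in order."""
    pos = start
    for a, b in swaps:
        # A swap of cups a and b moves the ball only if it is under one of them.
        if pos == a:
            pos = b
        elif pos == b:
            pos = a
    return pos

# Ball starts under cup 0; swapping (0, 2) then (1, 2) leaves it under cup 1.
print(track_ball(0, [(0, 2), (1, 2)]))  # → 1
```

Computing this ground truth is trivial symbolically; the benchmark's difficulty lies in whether a vision-language model can recover the same final state when some or all of the swap updates are presented as images rather than text.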
