

Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models

GLSim: Detecting Object Hallucinations in LVLMs via Global-Local Similarity

Seongheon Park · Sharon Li

Keywords: [ multi-modal large language models ] [ object hallucination detection ] [ large vision-language models ]


Abstract:

Object hallucination in large vision-language models presents a significant challenge to their safe deployment in real-world applications. Recent works have proposed object-level hallucination scores to estimate the likelihood of object hallucination; however, these methods typically adopt either a global or local perspective in isolation, which may limit detection reliability. In this paper, we introduce GLSim, a novel training-free object hallucination detection framework that leverages complementary global and local embedding similarity signals between image and text modalities, enabling more accurate and reliable hallucination detection in diverse scenarios. We comprehensively benchmark existing object hallucination detection methods and demonstrate that GLSim achieves superior detection performance, outperforming competitive baselines by a significant margin.
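The abstract describes the idea only at a high level. As a rough illustration of how a global and a local image-text similarity signal might be combined into a single object-level score, consider the minimal sketch below. The function name `glsim_score`, the patch-level max aggregation, and the weighting parameter `alpha` are assumptions for illustration only, not the paper's actual formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def glsim_score(global_image_emb, patch_embs, object_text_emb, alpha=0.5):
    """Hypothetical global-local similarity score for one candidate object.

    global_image_emb : (d,)   embedding of the whole image
    patch_embs       : (n, d) embeddings of local image patches or regions
    object_text_emb  : (d,)   embedding of the object mentioned in the text
    alpha            : assumed weight balancing global vs. local evidence

    Lower scores suggest the mentioned object may be hallucinated.
    """
    # Global signal: does the image as a whole support the object mention?
    global_sim = cosine(global_image_emb, object_text_emb)
    # Local signal: does any individual region support the object mention?
    local_sim = max(cosine(p, object_text_emb) for p in patch_embs)
    # Combine the complementary signals; the paper's aggregation may differ.
    return alpha * global_sim + (1.0 - alpha) * local_sim

# Toy usage with random vectors standing in for real encoder outputs.
rng = np.random.default_rng(0)
d, n_patches = 512, 16
score = glsim_score(rng.normal(size=d),
                    rng.normal(size=(n_patches, d)),
                    rng.normal(size=d))
print(f"hallucination score (lower = more suspicious): {score:.3f}")
```

In practice the embeddings would come from the LVLM's own vision and text representations, and a threshold on the score would flag likely hallucinated objects; those details depend on the paper itself.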
