Poster

Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models

Xin Zou · Yizhou WANG · Yibo Yan · Yuanhuiyi Lyu · Kening Zheng · Sirui Huang · Junkai Chen · Peijie Jiang · Jia Liu · Chang Tang · Xuming Hu

East Exhibition Hall A-B #E-2705
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) are prone to hallucinations, i.e., generated content that is nonsensical or unfaithful to input sources. Unlike in LLMs, hallucinations in MLLMs often stem from the text decoder's sensitivity to visual tokens, leading to a phenomenon akin to "amnesia" about visual information. To address this issue, we propose MemVR, a novel decoding paradigm inspired by common cognition: when the memory of an image seen a moment before fades, people look at it again to answer factually. Following this principle, we treat visual tokens as supplementary evidence, re-injecting them into the MLLM through the Feed Forward Network (FFN) as "key-value memory" at a middle trigger layer. This look-twice mechanism is triggered when the model exhibits high uncertainty during inference, effectively enhancing factual alignment. Comprehensive experimental evaluations demonstrate that MemVR significantly mitigates hallucination across various MLLMs and excels on general benchmarks without incurring additional time overhead.
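The abstract's core idea of re-injecting visual tokens into an FFN as key-value memory, gated by the model's uncertainty, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the projections `vis_key_proj` and `vis_val_proj`, the blending weight `inject_ratio`, and the entropy threshold are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


def ffn_with_visual_memory(hidden, visual_tokens, ffn_up, ffn_down,
                           vis_key_proj, vis_val_proj, inject_ratio=0.1):
    """Sketch: treat visual tokens as supplementary FFN key-value memory.

    hidden:        (batch, seq, d_model) hidden states at the trigger layer
    visual_tokens: (batch, n_vis, d_vis) visual features to re-inject
    ffn_up, ffn_down: the layer's original FFN projections (nn.Linear)
    vis_key_proj, vis_val_proj: hypothetical projections mapping visual
        tokens into key/value spaces (assumed, not specified by the abstract)
    """
    # Standard FFN pass: the up-projection rows act as "keys" and the
    # down-projection columns as "values" in the key-value-memory view.
    inner = F.gelu(ffn_up(hidden))                     # (B, S, d_ffn)
    out = ffn_down(inner)                              # (B, S, d_model)

    # Re-injected visual evidence: score hidden states against projected
    # visual keys, then read out the corresponding visual values.
    vis_keys = vis_key_proj(visual_tokens)             # (B, N, d_model)
    vis_vals = vis_val_proj(visual_tokens)             # (B, N, d_model)
    scores = torch.einsum("bsd,bnd->bsn", hidden, vis_keys)
    attn = scores.softmax(dim=-1)
    vis_out = torch.einsum("bsn,bnd->bsd", attn, vis_vals)

    # Blend the visual read-out with the normal FFN output.
    return out + inject_ratio * vis_out


def should_retrace(logits, entropy_threshold=3.0):
    """Trigger the look-twice step only when predictive entropy is high;
    the threshold here is illustrative, not the paper's criterion."""
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
    return entropy.mean() > entropy_threshold
```

In this reading, the uncertainty check decides whether the trigger layer performs the extra visual read-out at all, so decoding with low uncertainty proceeds unchanged and no additional passes over the image encoder are needed.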

Lay Summary:

How do multimodal large language models (MLLMs) handle hallucinations? Hallucinations in MLLMs often arise from the text decoder's sensitivity to visual tokens, causing a kind of "amnesia" about visual information. To tackle this, we propose MemVR, a new decoding paradigm inspired by the human behavior of looking at an image again when memory fails. MemVR treats visual tokens as evidence and re-injects them as "key-value memory". This "look-twice" mechanism reduces hallucination and performs well on benchmarks without extra time cost. MemVR is a plug-and-play, task-agnostic method with wide applicability.
