

Poster in Affinity Workshop: New In ML

Investigating Redundancy in Multimodal Large Language Models with Multiple Vision Encoders

Song Mao · Yang Chen


Abstract:

Multimodal Large Language Models (MLLMs) increasingly adopt multiple vision encoders to capture diverse visual information, ranging from coarse semantics to fine-grained details. While this approach is intended to enhance visual understanding, we observe that the performance gains from adding encoders often diminish and can even turn into degradation, a phenomenon we term \emph{encoder redundancy}. This paper presents a systematic investigation of this issue. Through comprehensive ablation studies on state-of-the-art multi-encoder MLLMs, we empirically demonstrate that significant redundancy exists. To quantify each encoder's unique contribution, we propose a principled metric, the Conditional Utilization Rate (CUR). Building on CUR, we introduce the Information Gap (IG) to capture the overall disparity in encoder utility within a model. Our experiments reveal that certain vision encoders contribute little, or even negatively, to overall performance, confirming that substantial redundancy is prevalent. These findings highlight critical inefficiencies in current multi-encoder designs and establish that the proposed metrics can serve as valuable diagnostic tools for developing more efficient and effective multimodal architectures.
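The abstract does not give formal definitions of CUR or IG. The sketch below is a minimal illustration under two assumptions: that an encoder's CUR is measured as the relative performance drop when that encoder is ablated while the others are kept, and that IG is the spread between the most- and least-utilized encoders. The helper names (`evaluate_model`, `analyze_redundancy`) are hypothetical placeholders, not an API from the paper.

```python
# Hypothetical sketch of Conditional Utilization Rate (CUR) and Information Gap (IG).
# Assumption: CUR of encoder i = relative score drop when encoder i is ablated
# (e.g., its visual tokens masked) while all other encoders remain active.
# Assumption: IG = gap between the highest and lowest CUR across encoders.
# `evaluate_model` is a user-supplied callable, not defined by the paper.

from typing import Callable, Dict, Sequence


def conditional_utilization_rate(full_score: float, ablated_score: float) -> float:
    """Relative performance change attributable to a single encoder."""
    return (full_score - ablated_score) / full_score


def information_gap(curs: Sequence[float]) -> float:
    """Disparity in encoder utility: spread between most and least useful encoder."""
    return max(curs) - min(curs)


def analyze_redundancy(
    encoders: Sequence[str],
    evaluate_model: Callable[[Sequence[str]], float],
) -> Dict[str, object]:
    """Ablate each encoder in turn and report per-encoder CUR plus the overall IG."""
    full_score = evaluate_model(list(encoders))
    curs = {}
    for enc in encoders:
        kept = [e for e in encoders if e != enc]
        curs[enc] = conditional_utilization_rate(full_score, evaluate_model(kept))
    return {"cur": curs, "ig": information_gap(list(curs.values()))}
```

Under this reading, an encoder with a CUR near zero (or negative) adds little beyond what the remaining encoders already provide, and a large IG indicates that utility is concentrated in a subset of the encoders.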
