

Spotlight Poster

Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger

Qi Yang · Chenghao Zhang · Lubin Fan · Kun Ding · Jieping Ye · Shiming Xiang

East Exhibition Hall A-B #E-3308
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Recent advancements in Large Vision-Language Models (LVLMs) have significantly improved performance on Visual Question Answering (VQA) tasks through multimodal Retrieval-Augmented Generation (RAG). However, existing methods still face challenges, such as the scarcity of knowledge bases with reasoning examples and erratic responses from retrieved knowledge. To address these issues, we propose a multimodal RAG framework, termed RCTS, which enhances LVLMs by constructing a Reasoning Context-enriched knowledge base and a Tree Search re-ranking method. Specifically, we introduce a self-consistent evaluation mechanism to enrich the knowledge base with intrinsic reasoning patterns. We further propose a Monte Carlo Tree Search with Heuristic Rewards (MCTS-HR) to prioritize the most relevant examples. This ensures that LVLMs can leverage high-quality contextual reasoning for better and more consistent responses. Extensive experiments demonstrate that our framework achieves state-of-the-art performance on multiple VQA datasets, significantly outperforming In-Context Learning (ICL) and Vanilla-RAG methods. These results highlight the effectiveness of our knowledge base and re-ranking method in improving LVLMs.
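The abstract does not include implementation details, but the core re-ranking idea can be illustrated with a simplified, flat variant of Monte Carlo Tree Search: each retrieved example is treated as a root child, rollouts sample a noisy heuristic reward, and UCB1 balances exploring rarely tried candidates against exploiting high scorers. All function and variable names here are hypothetical; the paper's actual MCTS-HR expands a search tree over reasoning contexts and derives heuristic rewards from LVLM self-consistency, which this sketch only stands in for.

```python
import math
import random

def rerank_with_ucb(candidates, heuristic_reward, n_iters=200, c=1.4, seed=0):
    """Re-rank retrieved examples via UCB-guided rollouts (flat MCTS sketch).

    candidates:        list of retrieved examples (any hashable items).
    heuristic_reward:  callable mapping a candidate to a scalar score
                       (a stand-in for the paper's heuristic rewards).
    Returns the candidates sorted by mean observed rollout reward.
    """
    rng = random.Random(seed)
    n = len(candidates)
    visits = [0] * n
    total = [0.0] * n
    for t in range(1, n_iters + 1):
        # UCB1 selection: try each candidate once, then pick the
        # highest upper confidence bound (exploitation + exploration).
        ucb = [
            float("inf") if visits[i] == 0
            else total[i] / visits[i] + c * math.sqrt(math.log(t) / visits[i])
            for i in range(n)
        ]
        i = max(range(n), key=lambda j: ucb[j])
        # Rollout: noisy heuristic reward, simulating stochastic
        # answer sampling from the LVLM.
        reward = heuristic_reward(candidates[i]) + rng.gauss(0.0, 0.05)
        visits[i] += 1
        total[i] += reward
    mean = [total[i] / visits[i] if visits[i] else 0.0 for i in range(n)]
    order = sorted(range(n), key=lambda i: mean[i], reverse=True)
    return [candidates[i] for i in order]

# Usage: rank three hypothetical examples by a toy reward table.
scores = {"example_a": 0.2, "example_b": 0.9, "example_c": 0.5}
ranked = rerank_with_ucb(list(scores), scores.get)
```

With enough rollouts the noisy estimates concentrate, so the highest-reward example reliably surfaces first; the exploration constant `c` controls how quickly the search commits to early winners.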

Lay Summary:

Visual Question Answering systems, which answer questions about images, often struggle when they lack enough examples showing how to reason through complex questions. Even when they find relevant examples, their answers can be inconsistent or unreliable.

To solve this, we developed a new framework called RCTS that helps AI models better understand and use existing knowledge. Our method builds a richer knowledge base by identifying and reinforcing consistent reasoning patterns. We also introduced a smart search technique, inspired by game-playing strategies, to pick the most helpful examples for answering each question.

This approach significantly improves the accuracy and consistency of AI-generated answers on a variety of image-based question-answering tasks. Our results show that RCTS outperforms current leading methods, offering a promising step forward in making AI systems more reliable when interpreting visual content and responding to natural language questions.
