

Poster

ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding

Xingyu Fu · Minqian Liu · Zhengyuan Yang · John Corring · Yijuan Lu · Jianwei Yang · Dan Roth · Dinei Florencio · Cha Zhang

West Exhibition Hall B2-B3 #W-202
[ Project Page ]
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Structured image understanding, such as interpreting tables and charts, requires strategically refocusing across the various structures and texts within an image, forming a reasoning sequence that arrives at the final answer. However, current multimodal large language models (LLMs) lack this multi-hop selective attention capability. In this work, we introduce ReFocus, a simple yet effective framework that equips multimodal LLMs with the ability to generate "visual thoughts" by performing visual editing on the input image through code, shifting and refining their visual focus. Specifically, ReFocus enables multimodal LLMs to generate Python code that calls tools to modify the input image, sequentially drawing boxes, highlighting sections, and masking out areas, thereby enhancing the visual reasoning process. We experiment on a wide range of structured image understanding tasks involving tables and charts. ReFocus substantially improves performance on all tasks over GPT-4o without visual editing, yielding an average gain of 11.0% on table tasks and 6.8% on chart tasks. We present an in-depth analysis of the effects of different visual edits and of why ReFocus improves performance without introducing additional information. Further, we collect a 14k training set using ReFocus and show that such visual chain-of-thought data with intermediate information offers better supervision than standard VQA data, reaching an 8.0% average gain over the same model trained with QA pairs and 2.6% over CoT.
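To make the editing step concrete, below is a minimal sketch of what such image-editing tools could look like, implemented with PIL. The function names, signatures, and defaults are illustrative assumptions for exposition, not the tools released with ReFocus.

```python
# Illustrative ReFocus-style editing tools (names, signatures, and defaults are
# assumptions for exposition, not the paper's released implementation).
from PIL import Image, ImageDraw


def draw_box(image: Image.Image, box: tuple[int, int, int, int],
             color: str = "red", width: int = 3) -> Image.Image:
    """Draw a rectangle around a region of interest, e.g., a table column."""
    out = image.copy()
    ImageDraw.Draw(out).rectangle(box, outline=color, width=width)
    return out


def highlight(image: Image.Image, box: tuple[int, int, int, int],
              color: tuple[int, int, int] = (255, 255, 0),
              alpha: int = 80) -> Image.Image:
    """Overlay a semi-transparent color on a region to emphasize it."""
    base = image.convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    ImageDraw.Draw(overlay).rectangle(box, fill=color + (alpha,))
    return Image.alpha_composite(base, overlay).convert("RGB")


def mask_out(image: Image.Image, box: tuple[int, int, int, int]) -> Image.Image:
    """Cover an irrelevant region with white so it no longer distracts the model."""
    out = image.copy()
    ImageDraw.Draw(out).rectangle(box, fill="white")
    return out
```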

Lay Summary:

We teach multimodal LLMs to think with images, using edited images as intermediate thoughts, to solve structured image problems (tables and charts). Given an image and a question, the model is prompted to generate Python code that calls provided tool functions to directly edit the input image. With these intermediate edits, the model understands the original question much better, yielding a 7-11% average gain. We call this method ReFocus, a visual chain-of-thought method that helps models re-focus on the important parts of the input image. We then apply this method to collect a 14k training set comprising the prompted intermediate edits and final results. We show that such training data provides better supervision than standard QA pairs or text-only CoT pairs, reaching an 8.0% average gain over the same model trained with QA pairs and 2.6% over CoT.
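As a usage sketch, this is the kind of edit sequence the model might be prompted to emit for a table question, assuming the illustrative tool functions sketched after the Abstract above; the coordinates are made up and a blank image stands in for a real table screenshot.

```python
# Hypothetical edit sequence a prompted model might emit for a table question,
# using the illustrative draw_box / highlight / mask_out tools sketched above.
# Coordinates are made up; a blank canvas stands in for a table screenshot.
from PIL import Image

image = Image.new("RGB", (800, 600), "white")     # stand-in for the input table image
image = mask_out(image, (400, 0, 800, 600))       # hide columns irrelevant to the question
image = highlight(image, (0, 120, 400, 160))      # emphasize the row being compared
image = draw_box(image, (250, 120, 400, 160))     # box the cell that holds the answer
image.save("table_refocused.png")                 # edited image is fed back for the final answer
```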
