

Poster
in
Workshop: DataWorld: Unifying data curation frameworks across domains

AutoDavis: Automatic and Dynamic Evaluation Protocol of Large Vision-Language Models on Visual Question-Answering

Han Bao · Yue Huang · Yanbo Wang · Jiayi Ye · Xiangqi Wang · Xiuying Chen · Yue Zhao · Tianyi Zhou · Mohamed Elhoseiny · Xiangliang Zhang

Keywords: [ LVLM ] [ automatic evaluation ]


Abstract:

Large Vision-Language Models (LVLMs) have become essential for advancing the integration of visual and linguistic information. However, evaluating LVLMs presents significant challenges: evaluation suites such as benchmarks and datasets typically require substantial human effort to construct, and they remain static, lacking flexibility. While automatic evaluation has been explored for the textual modality, the visual modality remains under-explored. In this work, we introduce AutoDavis, an automatic and dynamic framework that serves evaluation on demand, i.e., benchmarks LVLMs on specific aspects of model capability. AutoDavis leverages text-to-image models to generate relevant image samples and then utilizes LVLMs to orchestrate visual question-answering (VQA) tasks, completing the evaluation process efficiently and flexibly. Through an extensive evaluation of nine popular LVLMs across five demanded user inputs (i.e., evaluation capabilities), the framework demonstrates effectiveness and reliability.
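The on-demand pipeline the abstract describes could be sketched roughly as follows. This is a minimal illustrative skeleton, not the paper's implementation: all function names are assumptions, and the text-to-image and LVLM calls are replaced with stubs where real API calls would go.

```python
# Hypothetical sketch of an AutoDavis-style on-demand evaluation loop.
# All names here are illustrative; the T2I and LVLM calls are stubbed.

def generate_image(aspect: str) -> str:
    """Stand-in for a text-to-image model producing a sample for the aspect."""
    return f"image_for_{aspect}.png"  # real framework would return an image

def compose_vqa_item(aspect: str, image: str) -> dict:
    """Stand-in for the LVLM that orchestrates a VQA item from the image."""
    return {
        "image": image,
        "question": f"What in this image demonstrates {aspect}?",
        "reference": aspect,  # reference answer for scoring
    }

def ask_model(model: str, item: dict) -> str:
    """Stand-in for querying a candidate LVLM on the VQA item."""
    return item["reference"]  # stub: pretend the model answers correctly

def evaluate(models: list[str], aspects: list[str]) -> dict[str, float]:
    """Score each model as the fraction of VQA items answered correctly."""
    scores: dict[str, float] = {}
    for model in models:
        correct = 0
        for aspect in aspects:
            image = generate_image(aspect)
            item = compose_vqa_item(aspect, image)
            correct += ask_model(model, item) == item["reference"]
        scores[model] = correct / len(aspects)
    return scores

print(evaluate(["lvlm-a", "lvlm-b"], ["color grounding", "counting"]))
```

The key property this loop illustrates is that the benchmark is constructed at evaluation time from a user-supplied capability description, rather than drawn from a fixed, human-curated dataset.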
