

Poster

WMarkGPT: Watermarked Image Understanding via Multimodal Large Language Models

Tan Songbai · Xuerui Qiu · Yao Shu · Gang Xu · Linrui Xu · Xiangyu Xu · Huiping Zhuang · Ming Li · Fei Yu

East Exhibition Hall A-B #E-900
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Invisible watermarking is widely used to protect digital images from unauthorized use. Accurate assessment of watermarking efficacy is crucial for advancing algorithmic development. However, existing statistical metrics such as PSNR rely on access to original images, which are often unavailable in text-driven generative watermarking, and fail to capture critical aspects of watermarking, particularly visibility. More importantly, these metrics do not account for potential corruption of image content. To address these limitations, we propose WMarkGPT, the first multimodal large language model (MLLM) specifically designed for comprehensive watermarked image understanding without access to original images. WMarkGPT not only predicts watermark visibility but also generates detailed textual descriptions of its location, content, and impact on image semantics, enabling a more nuanced interpretation of watermarked images. To tackle the challenges of describing watermark locations precisely and of understanding images with vastly different content, we construct three visual question-answering (VQA) datasets: an object location-aware dataset, a synthetic watermarking dataset, and a real watermarking dataset. We introduce a meticulously designed three-stage learning pipeline to progressively equip WMarkGPT with the necessary abilities. Extensive experiments on synthetic and real watermarking QA datasets demonstrate that WMarkGPT outperforms existing MLLMs, achieving significant improvements in visibility prediction and content description. The datasets and code are released at https://github.com/TanSongBai/WMarkGPT.
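For context, PSNR is a full-reference metric: it is computed from the pixel-wise difference between the watermarked image and its pristine original, so it simply cannot be evaluated when the original is unavailable, which is the setting WMarkGPT targets. A minimal NumPy sketch of the standard definition is below; the function name and signature are illustrative and not taken from the paper's released code.

import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between an original image and its watermarked version.

    Being a full-reference metric, it requires the un-watermarked original,
    which is often unavailable in text-driven generative watermarking.
    """
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10((max_val ** 2) / mse)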

Lay Summary:

Digital watermarks are hidden markers used to protect images from misuse, but it is hard to measure how well they work without comparing the watermarked image to the original. Current methods also do not fully assess how visible the watermark is or how it affects the image's content.

To solve this, we created WMarkGPT, the first multimodal large language model that can analyze watermarked images without needing the original. It not only detects how noticeable the watermark is but also describes its location, content, and impact on the image in detail, such as explaining whether it distorts a person's face or blends into the background.

To train WMarkGPT, we built three specialized datasets and developed a step-by-step learning process to teach the model these skills. Tests show it outperforms other AI models in judging watermark visibility and describing images accurately.
