

Poster in Workshop: DIG-BUGS: Data in Generative Models (The Bad, the Ugly, and the Greats)

Detective SAM: Adapting SAM to Localize Diffusion-based Forgeries via Embedding Artifacts

Gert Lek · Chaoyi Zhu · Pin-Yu Chen · Robert Birke · Lydia Y. Chen

Keywords: [ Image Forgery Localization ] [ Diffusion-Based Editing ] [ Learnable Prompts ] [ Segment Anything Model (SAM) ] [ Forensic Perturbation Signals ] [ Foundation Models ]

Sat 19 Jul 3 p.m. PDT — 3:45 p.m. PDT

Abstract:

Image forgery localization in the diffusion era poses new challenges as modern editing pipelines produce photorealistic, semantically coherent manipulations that bypass conventional detectors. While some recent methods leverage foundation-model cues or handcrafted noise residuals, they still miss the subtle embedding artifacts introduced by modern diffusion pipelines. In response, we develop Detective SAM, which extends the Segment Anything Model by incorporating a blur-based detection signal, learnable coarse-to-fine prompt generation, and lightweight fine-tuning for automatic forgery mask generation. Detective SAM localizes forgeries with high precision. On three challenging benchmarks (MagicBrush, CoCoGlide, and IMD2020), it outperforms prior state-of-the-art methods, demonstrating the power of combining explicit forensic perturbation cues with foundation-model adaptation for robust image forgery localization in the diffusion era. The code will be published in the anonymous repository https://anonymous.4open.science/r/DetectiveSAM-BC61.
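The abstract names three ingredients: a blur-based forensic cue, learnable coarse-to-fine prompt generation, and lightweight fine-tuning of SAM for mask prediction. The sketch below is a minimal, hypothetical illustration of how such pieces could fit together; the stand-in encoder, the PromptGenerator and ForgeryLocalizer names, all shapes, and the similarity-then-upsample mask head are assumptions made for exposition, not the authors' released Detective SAM implementation (see the anonymous repository above for the actual code).

import torch
import torch.nn as nn
import torch.nn.functional as F


def blur_residual(image: torch.Tensor, kernel_size: int = 7) -> torch.Tensor:
    """High-frequency residual: the image minus a box-blurred copy (assumed forensic cue)."""
    c, pad = image.shape[1], kernel_size // 2
    weight = torch.full((c, 1, kernel_size, kernel_size), 1.0 / kernel_size**2,
                        device=image.device)
    blurred = F.conv2d(F.pad(image, [pad] * 4, mode="reflect"), weight, groups=c)
    return image - blurred


class PromptGenerator(nn.Module):
    """Coarse pooling of the residual map into prompt tokens, then a refinement step."""
    def __init__(self, embed_dim: int = 256, num_prompts: int = 8):
        super().__init__()
        self.coarse = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, num_prompts * embed_dim),
        )
        self.refine = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.num_prompts, self.embed_dim = num_prompts, embed_dim

    def forward(self, residual: torch.Tensor) -> torch.Tensor:
        prompts = self.coarse(residual).view(-1, self.num_prompts, self.embed_dim)
        return self.refine(prompts)  # (B, P, D) refined prompt embeddings


class ForgeryLocalizer(nn.Module):
    """Frozen image encoder (stand-in for SAM's ViT); only the prompt generator and a
    small mask head are trained, mirroring the 'lightweight fine-tuning' idea."""
    def __init__(self, image_encoder: nn.Module, embed_dim: int = 256):
        super().__init__()
        self.encoder = image_encoder
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.prompt_gen = PromptGenerator(embed_dim)
        self.mask_head = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.GELU(), nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        embed = self.encoder(image)                      # (B, D, H', W') image embedding
        prompts = self.prompt_gen(blur_residual(image))  # (B, P, D) forgery-aware prompts
        # Score each spatial location by its similarity to the learned forgery prompts.
        scores = torch.einsum("bpd,bdhw->bphw", prompts, embed).mean(1, keepdim=True)
        logits = self.mask_head(scores)                  # (B, 1, H', W') coarse mask logits
        return F.interpolate(logits, size=image.shape[-2:], mode="bilinear")


# Usage with a tiny placeholder encoder (a real pipeline would plug in SAM's image encoder):
encoder = nn.Conv2d(3, 256, kernel_size=16, stride=16)
model = ForgeryLocalizer(encoder)
mask_logits = model(torch.randn(2, 3, 256, 256))         # (2, 1, 256, 256) forgery mask logits

The design choice of freezing the encoder and training only the prompt generator and a small head is one plausible reading of "lightweight fine-tuning"; the paper's actual parameterization may differ.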
