Poster
in
Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models

Evaluating Adversarial Protections for Diffusion Personalization: A Comprehensive Study

Kai Ye · Tianyi Chen · Zhen Wang

Keywords: [ Diffusion Models ] [ Privacy Protection ] [ Personalized Image Generation ] [ Adversarial Perturbations ] [ Robustness Evaluation ]


Abstract:

With the increasing adoption of diffusion models for image generation and personalization, concerns about privacy breaches and content misuse have become more pressing. In this study, we conduct a comprehensive comparison of eight perturbation-based protection methods—AdvDM, ASPL, FSMG, MetaCloak, Mist, PhotoGuard, SDS, and SimAC—across both portrait and artwork domains. These methods are evaluated under varying perturbation budgets, using a range of metrics to assess visual imperceptibility and protective efficacy. Our results offer practical guidance for method selection. Code is available at: https://github.com/vkeilo/DiffAdvPerturbationBench.
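The protections compared here all add a small adversarial perturbation bounded by a budget (typically an L-infinity constraint), and imperceptibility is commonly scored with metrics such as PSNR. The sketch below is a minimal, hedged illustration of that general recipe, not the benchmark's actual code: `linf_perturb` applies a single FGSM-style step against a stand-in gradient, and `psnr` measures how visible the change is. The image, gradient, and budget values are all toy placeholders.

```python
import numpy as np

def linf_perturb(image, grad, epsilon):
    """One FGSM-style step under an L-infinity budget epsilon (toy example)."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

def psnr(clean, perturbed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means less visible change."""
    mse = np.mean((clean - perturbed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))          # stand-in for a portrait/artwork image
grad = rng.standard_normal(img.shape)  # stand-in for a model loss gradient

for eps in (2 / 255, 4 / 255, 8 / 255):  # typical perturbation budgets
    adv = linf_perturb(img, grad, eps)
    print(f"eps={eps:.4f}  PSNR={psnr(img, adv):.1f} dB")
```

Larger budgets give the protection more room to disrupt personalization but lower the PSNR, which is exactly the imperceptibility-vs-efficacy trade-off the study evaluates across methods.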
