

Poster in Workshop: 1st Workshop on Foundation Models for Structured Data (FMSD)

One-Run Privacy Auditing for Structured Generative and Foundation Models

Rishav Chourasia · Zilong Zhao · Uzair Javaid


Abstract: Generative models are gaining traction in synthetic data generation but see limited industry adoption because of a lack of standardized metrics for data utility, fidelity, and especially privacy. In this paper, we focus on privacy and propose a practical $\epsilon$-differential privacy auditing technique for structured generative and foundation models that measures memorization via nearest-neighbor distances between real training data and generated synthetic samples. By independently selecting a small subset of the training data for auditing, our method operates in a single training run and treats the generative pipeline as a black box. Our approach models synthetic samples as reconstruction attacks and yields significantly stronger lower bounds on privacy loss than traditional membership inference attacks. We evaluate our technique on five tabular generative models and one foundation model, and show that it provides a robust baseline for assessing the privacy of generative models.
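The sketch below illustrates the kind of nearest-neighbor audit the abstract describes; it is not the authors' implementation. All names and parameters are illustrative assumptions: we assume a small audit subset of training rows plus a disjoint held-out set, a black-box batch of synthetic samples, and a distance threshold that flags a row as "memorized" when a synthetic sample falls unusually close to it. The final step converts attack accuracy into an $\epsilon$ lower bound via the standard DP hypothesis-testing relation; the paper's reconstruction-based analysis and one-run accounting differ in detail.

```python
# Minimal sketch of a nearest-neighbor memorization audit (assumed design,
# not the paper's code). Rows are assumed to be numerically encoded
# tabular records; a real audit would handle mixed-type columns.
import numpy as np
from scipy.spatial import cKDTree

def nn_distances(queries: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Distance from each query row to its nearest synthetic row."""
    tree = cKDTree(references)
    dist, _ = tree.query(queries, k=1)
    return dist

def audit_epsilon_lower_bound(train_audit: np.ndarray,
                              holdout_audit: np.ndarray,
                              synthetic: np.ndarray,
                              delta: float = 1e-5) -> float:
    """Hypothetical audit: flag rows whose nearest synthetic sample is
    closer than a threshold, then turn the attack's true/false positive
    rates into an epsilon lower bound (standard membership-style bound)."""
    d_in = nn_distances(train_audit, synthetic)     # audited training rows
    d_out = nn_distances(holdout_audit, synthetic)  # held-out rows
    tau = np.median(np.concatenate([d_in, d_out]))  # illustrative threshold
    tpr = np.mean(d_in <= tau)   # members flagged as memorized
    fpr = np.mean(d_out <= tau)  # non-members flagged
    # DP hypothesis-testing relation: tpr <= e^eps * fpr + delta,
    # so eps >= log((tpr - delta) / fpr); clip to keep it well defined.
    fpr = max(float(fpr), 1e-12)
    return max(0.0, float(np.log(max(float(tpr) - delta, 1e-12) / fpr)))
```

Because the audit set is chosen independently of training and the generator is only queried for samples, a single training run suffices; in the paper's framing, unusually close synthetic samples act as reconstructions, which is what yields stronger lower bounds than thresholding membership scores alone.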
