Spotlight Poster
Auditing $f$-differential privacy in one run
Saeed Mahloujifar · Luca Melis · Kamalika Chaudhuri
East Exhibition Hall A-B #E-1009
Oral presentation: Oral 4C Privacy and Uncertainty Quantification
Wed 16 Jul 3:30 p.m. PDT — 4:30 p.m. PDT
[OpenReview]
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
Abstract:
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms. Existing auditing mechanisms, however, are either computationally inefficient -- requiring multiple runs of the machine learning algorithm -- or suboptimal in the empirical privacy they compute. In this work, we present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms. Our approach is efficient: similar to the recent work of Steinke, Nasr and Jagielski (2023), our auditing procedure leverages the randomness of examples in the input dataset and requires only a single run of the target mechanism. It is also more accurate: we provide a novel analysis that achieves tight empirical privacy estimates by using the hypothesized $f$-DP curve of the mechanism, which provides a more accurate measure of privacy than the traditional $(\epsilon,\delta)$ differential privacy parameters. We use our auditing procedure and analysis to obtain empirical privacy estimates, demonstrating that our auditing procedure delivers tighter privacy estimates.
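To make the single-run recipe concrete, below is a minimal illustrative sketch of an audit checked against a hypothesized $f$-DP curve. It assumes a Gaussian ($\mu$-GDP) trade-off curve, caps the per-canary guessing accuracy implied by that curve, and treats guesses as independent with a binomial-style test; the names (gaussian_tradeoff, audit_one_run, mu_hypothesized) and the test itself are simplifications for illustration, not the paper's exact analysis, which handles the dependence between guesses made from a single trained model.

```python
# Minimal sketch of a one-run f-DP audit (illustrative only; not the paper's analysis).
import numpy as np
from scipy.stats import binom, norm


def gaussian_tradeoff(alpha, mu):
    """Trade-off curve f(alpha) of mu-Gaussian DP: smallest achievable type-II error."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)


def audit_one_run(guess_correct, mu_hypothesized, significance=0.05):
    """One-run audit sketch.

    Setup (following the single-run recipe of Steinke, Nasr and Jagielski, 2023):
    each of m canary examples is included in the training set independently with
    probability 1/2, the mechanism is run once, and an attacker emits one
    membership guess per canary. guess_correct is the boolean vector of which
    guesses were right.

    Simplified check: under a hypothesized mu-GDP trade-off curve f, a single
    guess is correct with probability at most max_alpha 1 - (alpha + f(alpha))/2.
    Treating guesses as independent (a heuristic -- they all come from the same
    trained model), we compute a binomial p-value for the observed score.
    """
    m = len(guess_correct)
    k = int(np.sum(guess_correct))
    alpha_grid = np.linspace(1e-4, 1.0 - 1e-4, 1000)
    # Cap on per-canary guessing accuracy implied by the hypothesized curve.
    p_cap = np.max(1.0 - (alpha_grid + gaussian_tradeoff(alpha_grid, mu_hypothesized)) / 2.0)
    # P[Binomial(m, p_cap) >= k]: how surprising the observed score is if the curve holds.
    p_value = binom.sf(k - 1, m, p_cap)
    return p_value < significance, p_value


# Example: 1000 canaries, 650 correct guesses, hypothesized mu = 0.5 Gaussian DP.
guesses = np.concatenate([np.ones(650, dtype=bool), np.zeros(350, dtype=bool)])
violated, p = audit_one_run(guesses, mu_hypothesized=0.5)
print(f"f-DP hypothesis rejected: {violated} (p-value {p:.2e})")
```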
Lay Summary:
We design tests to check if a machine learning algorithm truly protects privacy as claimed. These tests are similar to known attacks that try to tell whether specific data was used during training, but our method only needs to run the algorithm once—making it more efficient. Compared to earlier approaches, our method catches more privacy failures.