

Poster

Probably Approximately Global Robustness Certification

Peter Blohm · Patrick Indri · Thomas Gärtner · SAGAR MALHOTRA

East Exhibition Hall A-B #E-2200
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract: We propose and investigate probabilistic guarantees for the adversarial robustness of classification algorithms. While traditional formal verification approaches for robustness are intractable and sampling-based approaches do not provide formal guarantees, our approach is able to efficiently certify a probabilistic relaxation of robustness. The key idea is to sample an $\epsilon$-net and invoke a local robustness oracle on the sample. Remarkably, the size of the sample needed to achieve probably approximately global robustness guarantees is independent of the input dimensionality, the number of classes, and the learning algorithm itself. Our approach can, therefore, be applied even to large neural networks that are beyond the scope of traditional formal verification. Experiments empirically confirm that it characterizes robustness better than state-of-the-art sampling-based approaches and scales better than formal methods.
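The sample-and-certify idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the oracle and sampler (`oracle`, `sample_input`) are hypothetical user-supplied callables, and the sample-size formula shown is a standard Hoeffding-style bound, used here only to illustrate why the required sample size is independent of input dimensionality and model size.

```python
import math

def pac_sample_size(eps: float, delta: float) -> int:
    """Hoeffding-style sample size: enough draws to estimate the robust
    fraction within eps, with probability at least 1 - delta. The bound
    depends only on eps and delta, not on dimension or model size."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def certify_global_robustness(oracle, sample_input, eps=0.05, delta=0.01):
    """Probabilistic certification sketch: draw n inputs, query a local
    robustness oracle (e.g. a formal verifier) on each, and return the
    empirical fraction of locally robust points."""
    n = pac_sample_size(eps, delta)
    robust = sum(1 for _ in range(n) if oracle(sample_input()))
    return robust / n  # within eps of true robust fraction w.p. >= 1 - delta
```

For example, `pac_sample_size(0.05, 0.01)` yields 1060 oracle calls regardless of whether the classifier has ten parameters or ten billion; only the cost of each individual oracle call depends on the model.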

Lay Summary:

We introduce a new way to check how well a classification system, like a neural network, handles small, ill-intentioned changes to its input, i.e., its robustness. Neural networks can commonly be fooled by tiny, specifically crafted changes to the input, which opens them up to manipulation that is not easily detected by humans. Methods that give absolute formal robustness guarantees are too slow for large neural networks, while faster methods just test the network on a set of samples but do not offer certainty about its robustness on other data points. Our method strikes a balance by providing guarantees while being efficient. We formalize how many examples are enough to achieve a desired level of certainty about robustness on new data. Importantly, the number of tests needed does not grow with the size or complexity of the system, making our method feasible even for large models.
