Poster
Probably Approximately Global Robustness Certification
Peter Blohm · Patrick Indri · Thomas Gärtner · Sagar Malhotra
East Exhibition Hall A-B #E-2200
We introduce a new way to check how well a classification system, such as a neural network, handles small, malicious changes to its input, i.e., its robustness. Neural networks can commonly be fooled by tiny, carefully crafted perturbations of the input, which opens them up to manipulation that is not easily detected by humans. Methods that give absolute formal robustness guarantees are too slow for large neural networks, while faster methods only test the network on a set of samples and offer no certainty about its robustness on other data points. Our method strikes a balance by providing probabilistic guarantees while remaining efficient. We formalize how many samples suffice to achieve a desired level of certainty about robustness on new data. Importantly, the number of tests needed does not grow with the size or complexity of the model, making our method feasible even for large networks.
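The poster does not spell out its exact bound, but the key property it claims, a number of tests that depends only on the desired accuracy and confidence, not on model size, is shared by standard Hoeffding-style sample bounds. The sketch below illustrates that idea; the function names, the `is_robust` local-robustness check, and the use of the Hoeffding inequality are illustrative assumptions, not the authors' actual method.

```python
import math

def sample_size(epsilon: float, delta: float) -> int:
    """Hoeffding-style bound (illustrative, not the poster's exact bound):
    number of i.i.d. test points so that the empirical robustness rate is
    within epsilon of the true rate with probability at least 1 - delta.
    Note: depends only on epsilon and delta, not on model size or
    input dimension."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def estimate_robustness(model, sampler, is_robust, epsilon, delta):
    """Draw enough samples, then report the empirical fraction of inputs
    on which the (hypothetical) local check `is_robust(model, x)` passes."""
    n = sample_size(epsilon, delta)
    passed = sum(1 for _ in range(n) if is_robust(model, sampler()))
    return passed / n
```

For example, certifying the robustness rate to within 5% error at 95% confidence requires 738 samples, regardless of whether the model has a thousand or a billion parameters.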