Poster

Adversarial Inputs for Linear Algebra Backends

Jonas Möller · Lukas Pirch · Felix Weissberg · Sebastian Baunsgaard · Thorsten Eisenhofer · Konrad Rieck

East Exhibition Hall A-B #E-2108
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
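To make the underlying effect concrete, the sketch below simulates two "backends" by accumulating the same dot product in opposite orders; because floating-point addition is not associative, the two accumulations can disagree in their last bits, and an input whose logit sits near the decision boundary may then receive different labels. This is only an illustrative toy under our own assumptions (the function names and the near-boundary construction are hypothetical), not the Chimera construction from the paper.

```python
import numpy as np

def dot_forward(w, x):
    """Dot product accumulated front to back (simulated backend A)."""
    acc = np.float32(0.0)
    for wi, xi in zip(w, x):
        acc += wi * xi
    return acc

def dot_reverse(w, x):
    """Same dot product accumulated back to front (simulated backend B)."""
    return dot_forward(w[::-1], x[::-1])

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # weights of a toy linear classifier
x = rng.standard_normal(4096).astype(np.float32)  # candidate input

# Push the logit w.x very close to the decision boundary at zero, so that
# rounding differences between the two accumulation orders can determine
# its sign (toy construction, not the paper's attack).
x -= w * (dot_forward(w, x) / dot_forward(w, w))

logit_a = dot_forward(w, x)   # "backend A"
logit_b = dot_reverse(w, x)   # "backend B"
print(f"backend A logit: {logit_a:+.3e} -> class {int(logit_a > 0)}")
print(f"backend B logit: {logit_b:+.3e} -> class {int(logit_b > 0)}")
# The logits differ by a few ULPs; near the boundary this can flip the class.
```

The attack described in the abstract does not rely on such flips occurring by chance; it selectively amplifies these implementation discrepancies so that the conflicting predictions arise on the targeted backends.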

Lay Summary:

Neural networks rely heavily on math operations, such as multiplying large tables of numbers. To run these calculations efficiently, machine learning systems use specialized software libraries (called linear algebra backends) that are tailored to different types of hardware, such as Intel processors, Nvidia graphics cards, or Apple devices. While all these backends perform the same basic computations, they do so in slightly different ways. These tiny differences usually don't matter. In this paper, however, we show that attackers can exploit them. We introduce what we call Chimera examples: carefully crafted inputs that make a neural network produce different results depending on the backend it uses. We examine how often these differences appear, how impactful they can be, and how to defend against this kind of vulnerability.
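For readers who want to observe such backend differences first-hand, the short sketch below compares the same matrix product computed by PyTorch's CPU backend and its CUDA backend and prints the largest elementwise deviation. It assumes a CUDA-capable GPU is available and is only a rough illustration, not the measurement setup used in the paper.

```python
import torch

torch.manual_seed(0)
a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

# Same product on two real backends: the CPU BLAS backend and cuBLAS on the GPU.
cpu_result = a @ b

if torch.cuda.is_available():
    gpu_result = (a.cuda() @ b.cuda()).cpu()
    diff = (cpu_result - gpu_result).abs().max().item()
    print(f"max elementwise difference between CPU and CUDA matmul: {diff:.2e}")
else:
    print("No CUDA device found; only the CPU result was computed.")
```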
