Poster
Mind the Gap: A Practical Attack on GGUF Quantization
Kazuki Egashira · Robin Staab · Mark Vero · Jingxuan He · Martin Vechev
East Exhibition Hall A-B #E-704
Quantization is a key technique for running large language models (LLMs) more efficiently: it reduces memory usage without sacrificing performance. In general, a model is expected to behave similarly before and after quantization. However, a malicious actor can train a model so that it behaves safely in full precision and only exhibits harmful behavior once it is quantized. This is risky because a user might evaluate the full-precision model, judge it safe and useful, and then quantize it to run on a smaller device, unknowingly activating the hidden attack. While similar attacks have been explored in prior work, they have mostly targeted classical, simpler quantization methods. We show for the first time that this kind of attack also works on GGUF, a more accurate quantization format that is widely used in real-world deployments.
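The mechanism this line of attack exploits is that quantization is many-to-one: a whole interval of full-precision weights maps to the same quantized value, so an attacker can adjust the full-precision weights (e.g., to appear safe) while the quantized model, and hence its behavior after quantization, stays fixed. The sketch below is a minimal, hypothetical illustration of that principle using simple symmetric round-to-nearest quantization; it is not the authors' training procedure, and GGUF itself uses more elaborate block-wise k-quant schemes.

```python
import numpy as np

def quantize_rtn(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Symmetric round-to-nearest quantization, then dequantization.
    A deliberate simplification: GGUF's k-quants use block-wise schemes."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale  # the values the quantized model actually computes with

# Two hypothetical weight vectors: a "benign" one and a perturbed variant.
# Each perturbation stays inside its weight's rounding interval, so both
# vectors quantize to exactly the same model.
benign = np.array([0.40, -0.25, 0.75, -1.00])
perturbed = benign + np.array([0.003, -0.002, 0.001, 0.0])

print(np.allclose(quantize_rtn(benign), quantize_rtn(perturbed)))  # True
```

In other words, full-precision behavior and quantized behavior are only loosely coupled: two models that differ in full precision can be indistinguishable after quantization, which is the gap the attack described above exploits.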