

Poster

FlatQuant: Flatness Matters for LLM Quantization

Yuxuan Sun · Ruikang Liu · Haoli Bai · Han Bao · Kang Zhao · Yuening Li · Jiaxin Hu · Xianzhi Yu · Lu Hou · Chun Yuan · Xin Jiang · Wulong Liu · Jun Yao

East Exhibition Hall A-B #E-2903
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Recently, quantization has been widely used for the compression and acceleration of large language models (LLMs). Due to outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error with equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and Hadamard transformation. However, we observe that these transformed weights and activations can still exhibit steep and dispersed distributions. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach that enhances the flatness of weights and activations. Our approach identifies optimal affine transformations for each linear layer, calibrated in hours via a lightweight objective. To reduce the runtime overhead of the affine transformations, we construct each one as a Kronecker product of two lightweight matrices and fuse all operations in FlatQuant into a single kernel. Extensive experiments demonstrate that FlatQuant sets a new state of the art for quantization. For example, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. Additionally, it provides up to 2.3x prefill speedup and 1.7x decoding speedup compared to the FP16 model. Code is available at: https://github.com/ruikangliu/FlatQuant.
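The Kronecker structure is what keeps the per-layer transform cheap: applying P1 ⊗ P2 amounts to two small matrix multiplications rather than one full hidden-size matmul. The following is a minimal PyTorch sketch of this idea, assuming a hidden size d = d1 * d2; the factor names (P1, P2) and the helper apply_kron_transform are illustrative stand-ins, not taken from the released FlatQuant code.

```python
import torch

d1, d2 = 16, 16                      # Kronecker factor sizes; real models use e.g. 64 x 64 = 4096
d = d1 * d2

# Stand-ins for the learned per-layer factors (orthogonal here just for the demo).
P1, _ = torch.linalg.qr(torch.randn(d1, d1))
P2, _ = torch.linalg.qr(torch.randn(d2, d2))

def apply_kron_transform(x, A, B):
    """Compute x @ (A kron B) without materializing the d x d Kronecker matrix."""
    *batch, _ = x.shape
    x = x.reshape(*batch, d1, d2)                       # split channels into (d1, d2)
    x = torch.einsum('...ij,ia,jb->...ab', x, A, B)     # apply A along d1, B along d2
    return x.reshape(*batch, d)

W = torch.randn(512, d)              # linear layer weight, y = x @ W.T
x = torch.randn(2, 8, d)             # activations (batch, seq, hidden)

# Transform activations by P = P1 kron P2 and fold P^{-1} into the weight, so the
# layer output is unchanged while both tensors become easier to quantize.
x_t = apply_kron_transform(x, P1, P2)
W_t = apply_kron_transform(W, torch.linalg.inv(P1).T, torch.linalg.inv(P2).T)

assert torch.allclose(x @ W.T, x_t @ W_t.T, atol=1e-3)
```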

Lay Summary:

Low-bit quantization is a widely used technique to compress large language models (LLMs) and accelerate inference. While 8-bit quantization is common, reducing precision to 4-bit often leads to significant accuracy loss due to extreme outliers in tensors, especially in activations. We find that improving the flatness of tensors can greatly reduce this quantization loss. We propose a new post-training quantization approach that uses affine transformations to smooth out outliers, enabled by algorithm-system co-design. To keep the transformations efficient, we construct them from lightweight Kronecker-product-structured matrices, optimized for each layer's outlier distribution. Our approach achieves less than 1% accuracy drop on major LLMs like LLaMA-3 under full 4-bit quantization while maintaining strong speedups, making 4-bit quantization far more practical for real-world applications.
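As a rough illustration of why flatness matters, the sketch below (not from the paper's code) applies symmetric 4-bit quantization with equally spaced levels to a roughly flat tensor and to the same tensor with a single injected outlier; the outlier alone inflates the step size and, with it, the error on every other channel.

```python
import torch

def quantize_int4(x):
    """Symmetric per-tensor 4-bit quantization: 16 equally spaced levels."""
    scale = x.abs().max() / 7              # int4 range is [-8, 7]; use 7 for symmetry
    q = torch.clamp(torch.round(x / scale), -8, 7)
    return q * scale                       # dequantized tensor

torch.manual_seed(0)
flat = torch.randn(4096)                   # roughly flat, no extreme values
spiky = flat.clone()
spiky[0] = 100.0                           # one activation outlier

for name, t in [("flat", flat), ("spiky", spiky)]:
    err = (quantize_int4(t) - t).pow(2).mean()
    print(f"{name:>5} MSE: {err:.4f}")

# The outlier forces a much larger quantization step, so the mean squared error
# grows by orders of magnitude even though only one value changed.
```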
