Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models
TMA-Adaptive FP8 Grouped GEMM: Eliminating Padding Requirements in Low-Precision Training and Inference on Hopper
Zhongling Su · Rong Fu · Weihan Cao · Jianfei Gao · Minxi Jin · PeiZhilin · Hui Wang
Abstract:
Current FP8 grouped GEMM implementations require padding each group to a fixed alignment (e.g., 128 rows), incurring memory and computational overhead. We propose \textit{TMA-Adaptive FP8 Grouped GEMM}, which eliminates padding by dynamically adapting to variable group dimensions via (1) a TMA descriptor pool with $\log_2(\mathrm{block}_M)$ preconfigured descriptors that handles all residual-row cases through dynamic runtime selection and dual-phase load-store operations, achieving comprehensive coverage with minimal overhead, and (2) TMA-alignment-aware management that satisfies the 16-byte global memory and 128-byte shared memory alignment requirements. Experiments demonstrate a 1.7\% to 20.4\% speedup and up to 23.8\% memory reduction compared to explicit padding followed by a state-of-the-art FP8 grouped GEMM, while maintaining full numerical equivalence for valid data. The source code is publicly available at an anonymous repository: \url{https://github.com/sukoncon/TMA-Adaptive-FP8-Grouped-GEMM}.
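To make the descriptor-pool idea concrete, the following minimal Python sketch shows one way a pool of $\log_2(\mathrm{block}_M)$ power-of-two row extents can cover any residual row count with at most two (possibly overlapping) tiles, which is one plausible reading of the dual-phase load-store scheme. It is an illustrative assumption, not the released implementation; the helpers `build_descriptor_pool` and `residual_tiles` are hypothetical names.

```python
# Hypothetical sketch: covering a residual row count r in [1, BLOCK_M) with a
# pool of log2(BLOCK_M) power-of-two row extents, using up to two overlapping
# tiles ("dual-phase") per group. Not taken from the paper's source code.

BLOCK_M = 128  # assumed CTA tile height of the grouped GEMM kernel


def build_descriptor_pool(block_m: int):
    """Row extents of the preconfigured descriptors: 1, 2, 4, ..., block_m // 2."""
    return [1 << k for k in range(block_m.bit_length() - 1)]  # log2(block_m) entries


def residual_tiles(r: int, block_m: int = BLOCK_M):
    """Return (row_offset, rows) tiles that exactly cover r residual rows.

    Phase 1 starts at row 0; phase 2 is anchored at the group's end and may
    overlap phase 1. Overlap is harmless because both phases move identical data.
    """
    assert 0 < r < block_m
    extent = 1 << (r.bit_length() - 1)       # largest power of two <= r
    tiles = [(0, extent)]                     # phase 1
    if extent < r:
        tiles.append((r - extent, extent))    # phase 2 (overlapping)
    return tiles


if __name__ == "__main__":
    print("descriptor row extents:", build_descriptor_pool(BLOCK_M))
    for r in (1, 5, 77, 127):
        print(r, "->", residual_tiles(r))
```

Under this assumed scheme, a single descriptor size is selected at runtime per residual group, and overlapping stores rewrite only valid rows, which is consistent with the claim of numerical equivalence for valid data.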