Poster
FloE: On-the-Fly MoE Inference on Memory-constrained GPU
Yuxin Zhou · Zheng Li · Jun Zhang · Jue Wang · Yiping Wang · Zhongle Xie · Ke Chen · Lidan Shou
East Exhibition Hall A-B #E-2602
Language models known as "Mixture-of-Experts" (MoE) are powerful tools, but their huge size makes it difficult to run them quickly on devices with limited memory, such as consumer-grade GPUs. To cope with this, some systems temporarily store model parts in slower memory (e.g., CPU main memory) and load them onto the GPU only when needed, but this offloading is slow, especially when quick responses are crucial.

We developed a new approach called FloE, which compresses parts of these models to significantly speed up on-the-fly inference. FloE exploits hidden redundancies within the model's expert components: unnecessary details that can be trimmed without significantly harming accuracy. By reducing the size of the experts' internal data, FloE lets these large models fit comfortably into small memory budgets.

Our tests show that FloE makes these models almost 49 times faster on common GPUs and dramatically reduces the memory requirement, all while maintaining strong accuracy. This advance makes powerful machine learning tools accessible to more users, even with limited hardware resources.
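To make the offloading-plus-compression idea concrete, below is a minimal, hypothetical Python/NumPy sketch of the two ingredients described above: experts are kept in a compressed form outside the GPU's fast memory, and only the experts a token is routed to are decompressed and used. It is an illustrative toy, not FloE's actual algorithm; all names (`OffloadedExpert`, `quantize_int8`, `moe_layer`) and the simple int8 quantization scheme are assumptions made for the example.

```python
# Hypothetical sketch (not FloE's actual method): compressed, on-demand experts.
import numpy as np


def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: shrinks the weights roughly 4x."""
    scale = np.abs(w).max() / 127.0
    return (w / scale).round().astype(np.int8), scale


def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale


class OffloadedExpert:
    """Expert FFN stored compressed off the GPU (here: plain host arrays)."""

    def __init__(self, d_model: int, d_ff: int, rng):
        self.w_in_q, self.s_in = quantize_int8(
            rng.standard_normal((d_model, d_ff)).astype(np.float32))
        self.w_out_q, self.s_out = quantize_int8(
            rng.standard_normal((d_ff, d_model)).astype(np.float32))

    def forward(self, x: np.ndarray) -> np.ndarray:
        # In a real system the decompressed weights would be transferred to
        # GPU memory at this point; the sketch just dequantizes on the host.
        w_in = dequantize(self.w_in_q, self.s_in)
        w_out = dequantize(self.w_out_q, self.s_out)
        return np.maximum(x @ w_in, 0.0) @ w_out  # ReLU feed-forward expert


def moe_layer(x, router_w, experts, top_k: int = 2) -> np.ndarray:
    """Route each token to its top-k experts and touch only those experts."""
    logits = x @ router_w                          # (tokens, num_experts)
    top = np.argsort(-logits, axis=1)[:, :top_k]   # chosen expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        probs = np.exp(logits[t, top[t]])
        probs /= probs.sum()
        for k, e in enumerate(top[t]):
            out[t] += probs[k] * experts[e].forward(x[t:t + 1])[0]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_model, d_ff, num_experts = 64, 256, 8
    experts = [OffloadedExpert(d_model, d_ff, rng) for _ in range(num_experts)]
    router_w = rng.standard_normal((d_model, num_experts)).astype(np.float32)
    tokens = rng.standard_normal((4, d_model)).astype(np.float32)
    print(moe_layer(tokens, router_w, experts).shape)  # -> (4, 64)
```

Because only the routed experts are ever decompressed, the working set per token stays small even though the full model would not fit in GPU memory; FloE's contribution lies in making that compression aggressive enough, and the loading fast enough, to avoid the usual offloading slowdown.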