Poster

Efficiently Serving Large Multimodal Models Using EPD Disaggregation

Gursimran Singh · Xinglu Wang · Yifan Hu · Timothy Yu · Linzi Xing · Wei Jiang · Zhefeng Wang · Bai Xiaolong · Yi Li · Ying Xiong · Yong Zhang · Zhenan Fan

West Exhibition Hall B2-B3 #W-521
[ Project Page ]
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by handling diverse inputs such as images, audio, and video, but at the cost of adding a multimodal encoding stage that increases both computational and memory overhead. This step negatively affects key Service Level Objectives (SLOs), such as time to first token (TTFT) and time per output token (TPOT). We introduce Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates the encoding, prefill, and decode stages onto dedicated resources. Unlike current systems, which bundle encoding and prefill together, our approach decouples these steps, unlocking new opportunities and optimizations. These include a mechanism to cache multimedia tokens for efficient transfer, a way to parallelize the encoding load within a request, a module for optimal resource allocation for disaggregated serving, and a role-switching method to handle changing workload characteristics. Experimental evaluations with popular LMMs show substantial gains: up to 15× lower peak memory utilization, up to 22× larger batch sizes, 10× more images per request, and 2.2× larger KV caches. Furthermore, EPD Disaggregation yields significant improvements in SLO attainment (up to 90–100%) and TTFT (up to 71% reduction) compared to systems that do not disaggregate. The code is available at https://github.com/vbdi/epdserve.
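As a rough illustration of the disaggregated pipeline described in the abstract, the sketch below models each stage as an independent worker connected by queues. It is a minimal approximation, not the authors' implementation: the names (Request, encode_worker, prefill_worker, decode_worker) are hypothetical, the model calls are stubs, and a real deployment would place each stage on dedicated GPUs and transfer cached multimedia tokens and KV caches across devices rather than passing Python objects.

    # Minimal, illustrative sketch of Encode-Prefill-Decode (EPD) disaggregation.
    # Each stage runs on its own worker and only communicates through the
    # artifacts it produces (multimedia tokens, then the KV cache).
    import queue
    import threading
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        rid: int
        images: list                                   # raw multimodal inputs
        prompt: str
        mm_tokens: list = field(default_factory=list)  # cached multimedia tokens (encode output)
        kv_cache: dict = field(default_factory=dict)   # prefill output
        output: list = field(default_factory=list)     # decoded tokens

    def encode_worker(in_q, out_q):
        # Stage 1: multimodal encoding on its own resources; hands cached tokens downstream.
        while (req := in_q.get()) is not None:
            req.mm_tokens = [f"img_tok({img})" for img in req.images]  # stub encoder
            out_q.put(req)
        out_q.put(None)  # propagate shutdown to the next stage

    def prefill_worker(in_q, out_q):
        # Stage 2: prefill consumes the cached multimedia tokens plus the text prompt.
        while (req := in_q.get()) is not None:
            req.kv_cache = {"len": len(req.mm_tokens) + len(req.prompt.split())}  # stub prefill
            out_q.put(req)
        out_q.put(None)

    def decode_worker(in_q, done):
        # Stage 3: autoregressive decoding against the handed-off KV cache.
        while (req := in_q.get()) is not None:
            req.output = [f"tok{t}" for t in range(4)]  # stub decode loop
            done.append(req)

    if __name__ == "__main__":
        e_q, p_q, d_q = queue.Queue(), queue.Queue(), queue.Queue()
        done = []
        workers = [
            threading.Thread(target=encode_worker, args=(e_q, p_q)),
            threading.Thread(target=prefill_worker, args=(p_q, d_q)),
            threading.Thread(target=decode_worker, args=(d_q, done)),
        ]
        for w in workers:
            w.start()
        for rid in range(3):
            e_q.put(Request(rid=rid, images=[f"img{rid}"], prompt="describe the image"))
        e_q.put(None)  # shut down the pipeline stage by stage
        for w in workers:
            w.join()
        print([req.output for req in done])

The sentinel propagation (each stage forwards None downstream) reflects the key property the abstract claims: the stages run independently and coordinate only through what they hand off, which is what lets each stage be scaled, batched, and provisioned on its own resources.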

Lay Summary:

Modern AI systems can now understand not just text, but also images 🖼️, audio 🔊, and video 🎥. These powerful tools, called multimodal models, power applications that can answer questions about pictures, assist with medical scans, or even analyze videos. ✨ However, running these models is slow and memory-hungry, especially when dealing with high-resolution images or complex inputs. That's because each model request goes through several heavy processing steps, and current systems make all those steps share the same resources, leading to traffic jams inside the computer.

Our work introduces a smarter way to run these models. We break the process into three stages: understanding the multimodal input 🖼️, preparing the response 🧮, and generating the output ✍️. Each stage is assigned its own set of specialized GPUs. This separation avoids bottlenecks 🚦 and lets the system run more smoothly and efficiently.

🚀 With our approach, the system can handle 10× more images, use 15× less memory, and respond up to 71% faster than current methods. This makes advanced AI tools more practical for real-world use, in areas like healthcare 🏥, creative work 🎨, and interactive digital assistants 🧑‍💻.
