Poster
LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models
Dachuan Shi · Yonggan Fu · Xiangchi Yuan · Zhongzhi Yu · Haoran You · Sixu Li · Xin Dong · Jan Kautz · Pavlo Molchanov · Yingyan (Celine) Lin
East Exhibition Hall A-B #E-2011
Recent advancements in Large Language Models (LLMs) have spurred interest in numerous applications requiring robust long-range capabilities, essential for processing extensive input contexts and continuously generating extended outputs. As sequence lengths increase, the number of Key-Value (KV) pairs in LLMs escalates, creating a significant efficiency bottleneck. In this paper, we propose a new KV cache optimization paradigm called LaCache, a training-free method for efficient and accurate generative inference of LLMs. LaCache enables LLMs to simultaneously address both critical challenges in long-range modeling: achieving robust long-range capabilities and sustaining continuous generation without running out of memory (OOM). Specifically, LaCache integrates two key innovations: (1) a ladder-shaped KV cache pattern that stores KV pairs not only sequentially (left-to-right within each layer) but also across layers (from shallow to deep), providing an extended span for capturing long-range dependencies under a fixed storage budget and thereby boosting long-range capabilities; and (2) an iterative compaction mechanism that progressively compresses older caches, freeing up space for new tokens within a fixed cache size. This token-distance-based dynamic compression enables more effective continuous generation under constrained cache budgets. Experiments across various tasks, benchmarks, and LLM models consistently validate LaCache's effectiveness in enhancing LLMs' long-range capabilities. Our code is available at https://github.com/GATECH-EIC/LaCache.
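To make the first mechanism concrete, the sketch below illustrates one way a ladder-shaped retention pattern could work: every layer keeps the same fixed number of KV pairs, but the positions retained for older tokens are shifted layer by layer, so the union of retained positions across layers spans a much longer range than any single layer's budget. This is a minimal illustrative sketch, not the paper's exact policy; the function name, the half-recent/half-strided split, and the modular per-layer offset are all assumptions made for illustration.

```python
import numpy as np

def ladder_keep_mask(seq_len: int, n_layers: int, budget: int) -> np.ndarray:
    """Hypothetical ladder-shaped retention rule (illustrative only).

    Returns a boolean mask of shape (n_layers, seq_len) where True marks
    a KV pair kept in the cache. Assumes budget < seq_len.
    """
    mask = np.zeros((n_layers, seq_len), dtype=bool)
    recent = budget // 2                       # newest tokens, kept in every layer
    strided = budget - recent                  # older tokens, sampled on a stride
    span = max(1, seq_len - recent)
    stride = max(1, span // max(1, strided))
    for layer in range(n_layers):
        mask[layer, seq_len - recent:] = True  # shared recent window
        offset = layer % stride                # per-layer shift -> ladder stripes
        older = np.arange(offset, seq_len - recent, stride)[:strided]
        mask[layer, older] = True              # older tokens, offset per layer
    return mask

mask = ladder_keep_mask(seq_len=1024, n_layers=32, budget=256)
print(mask.sum(axis=1))    # each layer stores exactly `budget` KV pairs
print(mask.any(axis=0).mean())  # union across layers covers (nearly) the full sequence
```

The point of the diagonal offsets is visible in the two printed quantities: per-layer storage stays at the fixed budget, while the positions covered somewhere in the layer stack extend across the whole context.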
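For the second mechanism, the sketch below shows one plausible compaction step, assuming, as a stand-in for the paper's exact rule, that the oldest portion of the cache is mean-pooled in adjacent pairs whenever the budget is exceeded. Applied once per decoding step, entries farther from the current token get merged repeatedly over time, so effective compression grows with token distance. `compact_cache` and `old_frac` are hypothetical names introduced here for illustration.

```python
import numpy as np

def compact_cache(keys: np.ndarray, values: np.ndarray,
                  budget: int, old_frac: float = 0.5):
    """Illustrative iterative-compaction step (hypothetical merge rule).

    keys/values: (n_tokens, head_dim) cached KV entries, oldest first.
    If the cache exceeds `budget`, mean-pool adjacent pairs within the
    oldest `old_frac` fraction, halving its footprint and freeing slots
    for newly generated tokens.
    """
    n = keys.shape[0]
    if n <= budget:
        return keys, values                # cache fits; nothing to compact
    cut = int(n * old_frac)
    cut -= cut % 2                         # even count so entries pair up cleanly
    old_k = keys[:cut].reshape(-1, 2, keys.shape[1]).mean(axis=1)
    old_v = values[:cut].reshape(-1, 2, values.shape[1]).mean(axis=1)
    return (np.concatenate([old_k, keys[cut:]]),
            np.concatenate([old_v, values[cut:]]))
```

Calling this after each generation step keeps the cache within a fixed size indefinitely: the newest KV pairs are stored losslessly, while regions repeatedly caught in the "old" fraction are compressed more and more aggressively, matching the token-distance-based compression described in the abstract.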