

Poster

ParallelComp: Parallel Long-Context Compressor for Length Extrapolation

Jing Xiong · Jianghan Shen · Chuanyang Zheng · Zhongwei Wan · Chenyang Zhao · Chiwun Yang · Fanghua Ye · Hongxia Yang · Lingpeng Kong · Ngai Wong

East Exhibition Hall A-B #E-3206
[ Project Page ]
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Extrapolating ultra-long contexts (text length >128K) remains a major challenge for large language models (LLMs): most training-free extrapolation methods are not only severely limited by memory bottlenecks but also suffer from the attention sink, which restricts their scalability and effectiveness in practice. In this work, we propose ParallelComp, a parallel long-context compression method that effectively overcomes the memory bottleneck, enabling 8B-parameter LLMs to extrapolate from 8K to 128K tokens on a single A100 80GB GPU in a training-free setting. ParallelComp splits the input into chunks and dynamically evicts redundant chunks and irrelevant tokens, supported by a parallel KV cache eviction mechanism. Importantly, we present a systematic theoretical and empirical analysis of attention biases in parallel attention (including the attention sink, recency bias, and middle bias) and reveal that these biases exhibit distinctive patterns under ultra-long context settings. We further design a KV cache eviction technique to mitigate these biases. Experimental results show that ParallelComp enables an 8B model (trained on 8K context) to achieve 91.17% of GPT-4's performance in ultra-long contexts, outperforming closed-source models such as Claude-2 and Kimi-Chat. We achieve a 1.76x improvement in chunk throughput and thereby a 23.50x acceleration in the prefill stage with negligible performance loss, paving the way for scalable and robust ultra-long-context extrapolation in LLMs. We release the code at https://github.com/menik1126/ParallelComp.
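
As a rough illustration of the chunk-and-evict idea described in the abstract (not the authors' released implementation; the function `chunk_prefill_with_eviction`, the `keep_ratio` parameter, and the probe-based scoring are all hypothetical), the following PyTorch sketch splits long key/value states into chunks, scores the tokens in each chunk, and keeps only the highest-scoring tokens before concatenating the compressed caches:

```python
import torch

def chunk_prefill_with_eviction(keys, values, queries, chunk_size=4096, keep_ratio=0.25):
    """Toy sketch: split long K/V states into chunks, score each token in a
    chunk by the attention mass it receives from the chunk's last few queries,
    and keep only the top-scoring tokens (a simple per-chunk KV cache eviction).

    keys, values, queries: [seq_len, d] tensors for a single attention head.
    """
    kept_k, kept_v = [], []
    for start in range(0, keys.size(0), chunk_size):
        k = keys[start:start + chunk_size]          # [c, d]
        v = values[start:start + chunk_size]
        q = queries[start:start + chunk_size]
        # Use the chunk's last 16 queries as a relevance probe over its keys.
        probe = q[-16:]
        scores = torch.softmax(probe @ k.T / k.size(-1) ** 0.5, dim=-1).mean(0)  # [c]
        keep = max(1, int(keep_ratio * k.size(0)))
        idx = scores.topk(keep).indices.sort().values  # keep original token order
        kept_k.append(k[idx])
        kept_v.append(v[idx])
    # Concatenate the compressed per-chunk caches into one global KV cache.
    return torch.cat(kept_k), torch.cat(kept_v)

# Usage on random states: 32K tokens compressed to roughly 8K cached tokens.
torch.manual_seed(0)
n, d = 32768, 64
K, V, Q = torch.randn(n, d), torch.randn(n, d), torch.randn(n, d)
ck, cv = chunk_prefill_with_eviction(K, V, Q)
print(ck.shape, cv.shape)  # torch.Size([8192, 64]) each
```

In the full method the chunks are also prefilled in parallel and whole low-relevance chunks can be evicted; the sketch only shows the per-chunk token-eviction step under these simplifying assumptions.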

Lay Summary:

Current large language models (LLMs) still face significant challenges when processing ultra-long texts (over 128,000 tokens), primarily due to computational resource limitations and bias issues within the attention mechanism. We propose a new method called ParallelComp, which significantly reduces memory usage, enabling models to handle texts ranging from 4,000 to 128,000 tokens without retraining. The method divides long texts into smaller chunks and processes them in parallel while automatically removing redundant or irrelevant parts, greatly improving efficiency and performance.

We also analyze common biases that models exhibit when processing long texts, such as overemphasizing the beginning or the end, and demonstrate that our method mitigates these issues. In experiments, our method enabled a medium-sized model (8 billion parameters) to perform exceptionally well on ultra-long text tasks, reaching performance close to GPT-4 and even surpassing some closed-source models.
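
Purely as a toy illustration of the positional biases mentioned above (not the paper's actual analysis procedure; `positional_attention_mass` is a made-up diagnostic helper), one can bucket attention mass by key position to see how strongly a model favors the beginning, middle, or end of a long context:

```python
import torch

def positional_attention_mass(attn, n_bins=3):
    """Toy diagnostic: given an attention matrix `attn` of shape
    [num_queries, num_keys] (rows sum to 1), report the fraction of total
    attention mass falling on each positional bin (e.g. beginning / middle /
    end thirds of the context). A large first bin suggests an attention sink;
    a large last bin suggests recency bias."""
    num_keys = attn.size(-1)
    edges = torch.linspace(0, num_keys, n_bins + 1).long()
    mass = torch.stack([attn[:, edges[i]:edges[i + 1]].sum() for i in range(n_bins)])
    return mass / attn.sum()

# Example with a synthetic attention matrix skewed toward the first tokens.
torch.manual_seed(0)
logits = torch.randn(8, 300)
logits[:, :4] += 5.0                       # exaggerate an "attention sink"
attn = torch.softmax(logits, dim=-1)
print(positional_attention_mass(attn))     # the first bin dominates
```

The snippet only shows how such a bias could be observed; the paper's KV cache eviction technique is what counteracts it in practice.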
