

Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models

Cache Saver: A Modular Framework for Efficient, Affordable, and Reproducible LLM Inference

Nearchos Potamitis · Lars Klein · Chongyang Xu · Attreyee Mukherjee · Bardia Mohammadi · Niket Tandon · Laurent Bindschaedler · Akhil Arora


Abstract:

Inference constitutes the majority of costs throughout the lifecycle of a large language model (LLM). While numerous LLM inference engines focusing primarily on low-level optimizations have been developed, there is a scarcity of non-intrusive client-side frameworks that perform high-level optimizations. In this paper, we introduce CacheSaver, a modular, plug-and-play, and asynchronous framework for high-level inference optimizations that integrates cleanly into existing systems without requiring changes to the end-user application logic or the underlying LLM. The key novelty is a namespace-aware list-valued cache that preserves the statistical integrity of LLM responses, generating independent and identically distributed responses within a namespace, while also ensuring reproducibility. Moreover, as a direct consequence of operating at a high level, CacheSaver supports both local and online models. We conduct extensive experiments with five representative state-of-the-art reasoning strategies, five diverse benchmark tasks, and three different LLMs. On average across all methods, tasks, and LLMs, CacheSaver reduces cost by approximately 25% and CO2 emissions by approximately 35%. Notably, CacheSaver excels in practical machine learning scenarios such as benchmarking multiple methods or conducting ablation analyses of a specific method, yielding substantial cost and carbon footprint reductions of approximately 60%. CacheSaver is publicly available at https://github.com/au-clan/cachesaver
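
To make the core idea of a namespace-aware list-valued cache concrete, the following is a minimal, hypothetical sketch of how such a cache could behave; the names (`NamespaceListCache`, `sample_fn`, `query`) are illustrative assumptions and do not reflect the actual CacheSaver API. The cache stores a list of sampled responses per prompt; each namespace consumes that list in order, so a namespace never sees the same sample twice (keeping its responses i.i.d.), re-running a namespace against the same stored lists returns identical responses (reproducibility), and different namespaces share the stored samples, which is where the cost savings come from.

```python
# Illustrative sketch only: hypothetical names, not the CacheSaver API.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple


class NamespaceListCache:
    def __init__(self, sample_fn: Callable[[str], str]):
        # sample_fn draws one fresh response from the underlying LLM.
        self.sample_fn = sample_fn
        self.store: Dict[str, List[str]] = defaultdict(list)        # prompt -> cached samples
        self.cursor: Dict[Tuple[str, str], int] = defaultdict(int)  # (namespace, prompt) -> next index

    def query(self, namespace: str, prompt: str) -> str:
        i = self.cursor[(namespace, prompt)]
        samples = self.store[prompt]
        if i >= len(samples):
            # This namespace has exhausted the cached samples: draw a new one.
            samples.append(self.sample_fn(prompt))
        self.cursor[(namespace, prompt)] = i + 1
        return samples[i]


if __name__ == "__main__":
    import random

    def fake_llm(prompt: str) -> str:
        return f"{prompt} -> answer #{random.randint(0, 999)}"

    cache = NamespaceListCache(fake_llm)
    # Two methods benchmarked on the same prompt: each namespace gets its own
    # i.i.d. stream, while the second namespace reuses samples already paid for.
    print(cache.query("method_A", "2+2?"))  # fresh sample
    print(cache.query("method_A", "2+2?"))  # second fresh sample for method_A
    print(cache.query("method_B", "2+2?"))  # cache hit: reuses method_A's first sample
```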
