

Poster

Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation

Tiansheng Wen · Yifei Wang · Zequn Zeng · Zhong Peng · Yudi Su · Xinyang Liu · Bo Chen · Hongwei Liu · Stefanie Jegelka · Chenyu You

East Exhibition Hall A-B #E-1705
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 4A Representations 2
Wed 16 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Many large-scale systems rely on high-quality deep representations (embeddings) to facilitate tasks like retrieval, search, and generative modeling. Matryoshka Representation Learning (MRL) recently emerged as a solution for adaptive embedding lengths, but it requires full model retraining and suffers from noticeable performance degradations at short lengths. In this paper, we show that sparse coding offers a compelling alternative for achieving adaptive representation with minimal overhead and higher fidelity. We propose Contrastive Sparse Representation (CSR), a method that sparsifies pre-trained embeddings into a high-dimensional but selectively activated feature space. By leveraging lightweight autoencoding and task-aware contrastive objectives, CSR preserves semantic quality while allowing flexible, cost-effective inference at different sparsity levels. Extensive experiments on image, text, and multimodal benchmarks demonstrate that CSR consistently outperforms MRL in terms of both accuracy and retrieval speed—often by large margins—while also cutting training time to a fraction of that required by MRL. Our results establish sparse coding as a powerful paradigm for adaptive representation learning in real-world applications where efficiency and fidelity are both paramount. Code is available at this URL.
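The recipe described in the abstract (a lightweight autoencoder over frozen pre-trained embeddings, trained with a reconstruction term and a contrastive term, with sparsity controlling inference cost) can be illustrated with a minimal PyTorch sketch. All names, dimensions, the top-k activation rule, and the loss weighting below are illustrative assumptions, not the authors' released implementation.

# Minimal sketch, assuming a top-k sparse autoencoder trained on frozen,
# pre-trained embeddings. Hyperparameters and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKSparseAutoencoder(nn.Module):
    def __init__(self, embed_dim=768, hidden_dim=8192, k=32):
        super().__init__()
        self.encoder = nn.Linear(embed_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, embed_dim)
        self.k = k  # number of active units; smaller k -> cheaper inference

    def encode(self, x, k=None):
        k = k or self.k
        h = F.relu(self.encoder(x))
        # Keep only the k largest activations per example (selective activation).
        topk = torch.topk(h, k, dim=-1)
        return torch.zeros_like(h).scatter_(-1, topk.indices, topk.values)

    def forward(self, x, k=None):
        z = self.encode(x, k)
        return self.decoder(z), z

def contrastive_loss(z_a, z_b, temperature=0.07):
    # InfoNCE-style objective aligning sparse codes of paired views/modalities.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

def training_step(model, emb_a, emb_b, recon_weight=1.0, contrast_weight=1.0):
    # Reconstruct the frozen embeddings while keeping paired sparse codes close,
    # so semantic structure survives sparsification.
    recon_a, z_a = model(emb_a)
    recon_b, z_b = model(emb_b)
    loss = recon_weight * (F.mse_loss(recon_a, emb_a) + F.mse_loss(recon_b, emb_b))
    return loss + contrast_weight * contrastive_loss(z_a, z_b)

Under these assumptions, the sparse code can stand in for the dense embedding, and the activation budget k becomes the knob that trades retrieval speed against fidelity.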

Lay Summary:

Modern AI systems rely on "embeddings" - digital representations that capture the meaning of images, text, or other data. These embeddings need to work efficiently across devices with different computing capabilities, from powerful servers to mobile phones.

Our research introduces Contrastive Sparse Representation (CSR), a new technique that makes embeddings more adaptable without sacrificing quality. Unlike previous approaches that require complete retraining of AI models, CSR works with existing pre-trained embeddings and transforms them into a format where only the most important features are activated.

Think of CSR like compressing a high-resolution photo: you can choose different compression levels depending on your needs, with each level preserving the most important visual information. Similarly, CSR allows AI systems to adjust embedding sizes based on available resources while maintaining accuracy.

Our experiments with images, text, and combined data show that CSR outperforms previous methods in both accuracy and speed. It also requires significantly less training time, making it practical for real-world applications where both performance and efficiency matter.
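As a hypothetical usage example building on the sketch after the abstract, the same trained encoder could be queried with different sparsity budgets at inference time; the specific k values below are illustrative only.

# Hypothetical usage, reusing the TopKSparseAutoencoder class from the sketch above.
import torch

model = TopKSparseAutoencoder(embed_dim=768, hidden_dim=8192, k=32)
frozen_emb = torch.randn(4, 768)                 # embeddings from any pre-trained model

codes_server = model.encode(frozen_emb, k=128)   # higher-fidelity setting for servers
codes_mobile = model.encode(frozen_emb, k=8)     # aggressive sparsity for edge devices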
