

Poster

Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning

Wanyun Xie · Francesco Tonin · Volkan Cevher

East Exhibition Hall A-B #E-2807
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Training data mixtures greatly impact the generalization performance of large language models. Existing domain reweighting methods often rely on costly weight computations and require retraining when new data is introduced. To address this, we introduce a flexible and efficient data-mixing framework, Chameleon, that employs leverage scores to quantify domain importance within a learned embedding space. We first construct a domain affinity matrix over domain embeddings. The induced leverage scores determine a mixture that upweights domains sharing common representations in embedding space. This formulation allows direct transfer to new data by computing the new domain embeddings. In experiments, we demonstrate improvements over three key scenarios: (i) our computed weights improve performance on pretraining domains with a fraction of the compute of existing methods; (ii) Chameleon can adapt to data changes without proxy retraining, boosting few-shot reasoning accuracies when transferred to new data; (iii) our method enables efficient domain reweighting in finetuning, consistently improving test perplexity on all finetuning domains over the uniform mixture. Our code is available at https://github.com/LIONS-EPFL/Chameleon.
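The core computation described above can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's implementation: the function names (`leverage_scores`, `mixture_weights`), the random embeddings, and the use of a ridge-regularized pseudoinverse are all hypothetical choices; the actual formulation, regularization, and embedding model are defined in the paper and repository.

```python
import numpy as np

def leverage_scores(E, reg=1e-6):
    """Leverage scores of domain embeddings E (n_domains x dim).

    Sketch: the i-th score is the i-th diagonal entry of the projection
    E (E^T E + reg*I)^{-1} E^T; the small ridge term `reg` is an assumed
    stabilizer for rank-deficient embedding matrices.
    """
    d = E.shape[1]
    P = E @ np.linalg.inv(E.T @ E + reg * np.eye(d)) @ E.T
    return np.diag(P)

def mixture_weights(E):
    """Normalize leverage scores into a sampling mixture over domains."""
    s = leverage_scores(E)
    return s / s.sum()

# Toy usage with 5 hypothetical domains embedded in 3 dimensions.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))
w = mixture_weights(E)  # nonnegative weights summing to 1
```

The resulting `w` would then replace a uniform mixture when sampling pretraining or finetuning data; transferring to new domains only requires embedding them and recomputing `w`, with no proxy retraining.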

Lay Summary: Training large language models (LLMs) is heavily impacted by the composition of their training data. Existing methods for mixing data domains are often computationally expensive and impractical. We introduce Chameleon, a flexible and efficient data-mixing framework that quantifies domain importance using leverage scores within a learned embedding space. This approach (i) improves universal generalization, the fundamental goal of domain reweighting; (ii) adapts to domain modifications -- data naturally evolves between preparation and LLM training, making frequent recalibration impractical; (iii) handles different training stages, including both pretraining and finetuning.
