

Oral in Workshop: DataWorld: Unifying data curation frameworks across domains

Evaluating Sample Utility for Efficient Data Selection by Mimicking Model Weights

Tzu-Heng Huang · Manjot Bilkhu · Frederic Sala

Keywords: [ Data Selection ] [ Data Quality Metric ] [ Data Curation ]

[ Project Page ]
Sat 19 Jul 2:30 p.m. PDT — 2:45 p.m. PDT

Abstract:

Multimodal models are trained on large-scale web-crawled datasets, which often contain noise, bias, and irrelevant information. This motivates the use of data selection techniques, which can be divided into model-free variants, relying on heuristic rules and downstream datasets, and model-based approaches, such as those using influence functions. The former can be expensive to design and risk introducing unwanted dataset dependencies, while the latter are often computationally prohibitive. In this work, we propose an efficient, model-based approach using the Mimic Score, a new data-quality metric that leverages the weights of a reference model to assess the usefulness of individual samples for training a new model. Our method measures the alignment between training gradients and a target direction induced by this reference model. Building on the derived mimic scores, we develop Grad-Mimic: a framework that prioritizes samples to learn, estimates overall sample utility, and creates effective filters. Empirically, using mimic scores to guide training improves data efficiency, accelerates convergence, yields consistent performance gains across six image datasets, and enhances CLIP models with 20.7% fewer training steps. Moreover, mimic score-based filters complement existing filtering methods, e.g., training improved CLIP models with 4.7 million fewer samples while offering accurate estimation of dataset quality.
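The sketch below illustrates one plausible reading of the scoring idea described in the abstract: take the direction from the current weights toward the reference model's weights as the target, and score each sample by how well a descent step on its loss aligns with that direction. The specific formulation (weight difference as the target direction, cosine similarity as the alignment measure, and the helper name `mimic_scores`) is an illustrative assumption, not the paper's exact method.

```python
import torch
import torch.nn.functional as F


def mimic_scores(model, ref_params, batch_x, batch_y, loss_fn):
    """Illustrative sketch: score samples by gradient alignment with the
    direction toward a reference model's weights (assumed formulation)."""
    cur_params = [p.detach() for p in model.parameters()]
    # Assumed target direction induced by the reference model: w_ref - w_current.
    direction = torch.cat([(r - c).flatten() for r, c in zip(ref_params, cur_params)])
    direction = F.normalize(direction, dim=0)

    scores = []
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grad = torch.cat([p.grad.flatten() for p in model.parameters()])
        # A gradient-descent step moves along -grad; its cosine alignment with
        # the target direction serves here as the sample's score.
        scores.append(F.cosine_similarity(-grad, direction, dim=0).item())
    return scores


# Hypothetical usage with a toy linear classifier standing in for the new
# model and a second, randomly initialized one standing in for the reference.
model = torch.nn.Linear(8, 2)
reference = torch.nn.Linear(8, 2)
xs, ys = torch.randn(16, 8), torch.randint(0, 2, (16,))
scores = mimic_scores(model, list(reference.parameters()), xs, ys, F.cross_entropy)
# Higher-scoring samples would be prioritized for training or retained by a filter.
```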
