Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models
SimBA: Simplifying Benchmark Analysis
Nishant Subramani · Alfredo Gomez · Mona Diab
Keywords: [ evaluation ] [ automatic evaluation of datasets ] [ benchmarking ] [ evaluation methodologies ] [ metrics ]
Modern language models are evaluated on large benchmarks, which are difficult to make sense of, especially for model selection. Looking at the raw evaluation numbers themselves through a model-centric lens, we propose SimBA, a three-phase framework to Simplify Benchmark Analysis. The three phases of SimBA are: stalk, where we conduct dataset and model comparisons; prowl, where we discover a representative subset; and pounce, where we use the representative subset to predict performance on a held-out set of models. Applying SimBA to three popular LM benchmarks (HELM, MMLU, and BigBenchLite) reveals that across all three benchmarks, datasets and models relate strongly to one another (stalk). We develop a representative set discovery algorithm that covers a benchmark using raw evaluation scores alone. Using our algorithm, we find that with 5.9% (1/17), 1.7% (1/58), and 16.2% (12/74) of the datasets for HELM, MMLU, and BigBenchLite respectively, we achieve coverage levels of at least 95% (prowl). Moreover, using just these representative subsets, we can both preserve model ranks and predict performance on a held-out set of models with near-zero mean squared error (pounce). Taken together, SimBA can help model developers improve efficiency during model training and help dataset creators validate whether a newly created dataset differs from existing datasets in a benchmark.
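The abstract names but does not specify the representative-set discovery algorithm. As a rough illustration only, the sketch below assumes that "coverage" means rank agreement between a model's mean score on the candidate subset and its mean score on the full benchmark, and uses a simple greedy search over a model-by-dataset score matrix; the function names, the 0.95 target, and the linear-fit prediction step for held-out models are all hypothetical choices, not the authors' implementation.

```python
# Hypothetical greedy search for a representative dataset subset over a
# model x dataset score matrix. Names and thresholds are illustrative,
# not taken from the SimBA paper.
import numpy as np
from scipy.stats import spearmanr

def coverage(scores: np.ndarray, subset: list[int]) -> float:
    """Rank correlation between mean score on the subset and on the full benchmark."""
    full_mean = scores.mean(axis=1)               # one aggregate score per model
    subset_mean = scores[:, subset].mean(axis=1)  # aggregate over the subset only
    return spearmanr(full_mean, subset_mean)[0]

def greedy_representative_subset(scores: np.ndarray, target: float = 0.95) -> list[int]:
    """Greedily add the dataset that most improves coverage until the target is met."""
    subset: list[int] = []
    remaining = set(range(scores.shape[1]))
    while remaining and (not subset or coverage(scores, subset) < target):
        best = max(remaining, key=lambda d: coverage(scores, subset + [d]))
        subset.append(best)
        remaining.remove(best)
    return subset

# Toy usage: 20 "training" models evaluated on 30 datasets (random scores).
rng = np.random.default_rng(0)
scores = rng.random((20, 30))
subset = greedy_representative_subset(scores, target=0.95)

# "Pounce"-style step (one possible realization): predict a held-out model's
# benchmark mean from its subset mean via a linear fit on the training models.
coef = np.polyfit(scores[:, subset].mean(axis=1), scores.mean(axis=1), deg=1)
held_out = rng.random(30)                         # scores of one held-out model
predicted_mean = np.polyval(coef, held_out[subset].mean())
print(len(subset), coverage(scores, subset), predicted_mean)
```

On real benchmark score matrices (rather than the random toy data above), this kind of greedy procedure would terminate once the chosen coverage criterion reaches the target, which is the sense in which a small fraction of datasets can stand in for the full benchmark.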