

Spotlight Talk in Workshop: Workshop on Technical AI Governance

Deprecating Benchmarks: Criteria and Framework

Ayrton San Joaquin · Rokas Gipiškis · Leon Staufer · Ariel Gil

Sat 19 Jul 10:40 a.m. PDT — 10:50 a.m. PDT

Abstract: As frontier artificial intelligence (AI) models rapidly advance, benchmarks are integral to comparing different models and measuring their progress in different task-specific domains. However, there is a lack of guidance on when and how benchmarks should be deprecated once they cease to effectively serve their purpose. This risks benchmark scores over-valuing model capabilities at best, and safety-washing or hiding capabilities at worst. Based on a review of benchmarking practices, we propose criteria for deciding when to fully or partially deprecate benchmarks, and a framework for deprecating them. Our work aims to advance the state of benchmarking towards rigorous and high-quality evaluations, especially for frontier models, and our recommendations are intended to benefit benchmark developers, benchmark users, AI governance actors (across governments, academia, and industry panels), and policy makers.
