

Poster in Workshop: Workshop on Technical AI Governance

A Conceptual Framework for AI Capability Evaluations

María Carro · Denise Mester · Francisca Selasco · Luca Gangi · Matheo Musa · Lola Pereyra · Mario Leiva · Juan Corvalan · Maria Vanina Martinez · Gerardo Simari


Abstract:

As AI systems advance and become integrated into society, well-designed, transparent evaluations are becoming essential tools of AI governance, informing decisions by providing evidence about system capabilities and risks. Yet it remains unclear how to perform these assessments both comprehensively and reliably. To address this gap, we propose a conceptual framework for analyzing AI capability evaluations, offering a structured, descriptive approach that systematizes the analysis of widely used methods and terminology without imposing new taxonomies or rigid formats. The framework supports transparency, comparability, and interpretability across diverse evaluations. It also enables researchers to identify methodological weaknesses, assists practitioners in designing evaluations, and gives policymakers an accessible tool to scrutinize, compare, and navigate complex evaluation landscapes.
