

Poster

Position: AI Should Not Be An Imitation Game: Centaur Evaluations

Andreas Haupt · Erik Brynjolfsson

East Exhibition Hall A-B #E-603
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Benchmarks and evaluations are central to machine learning methodology and direct research in the field. Current evaluations commonly test systems in the absence of humans. This position paper argues that the machine learning community should increasingly use centaur evaluations, in which humans and AI jointly solve tasks. Centaur evaluations refocus machine learning development toward human augmentation rather than human replacement, allow direct evaluation of human-centered desiderata such as interpretability and helpfulness, and can be more challenging and realistic than existing evaluations. By shifting the focus from automation toward collaboration between humans and AI, centaur evaluations can drive progress toward more effective and human-augmenting machine learning systems.

Lay Summary:

To decide which Artificial Intelligence system (e.g., ChatGPT, Claude, or Gemini) to use for a task, we need to know which ones are good at the task at hand. Currently, most evaluations test how well models perform human activities on their own, such as solving mathematical problems or summarizing text. We argue that evaluations should include humans, e.g., by letting many humans solve a writing or coding task together with different Artificial Intelligence models and comparing the outcomes.
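Read literally, the lay summary describes a simple protocol: pair many human participants with each of several AI models on the same tasks and compare the joint outcomes per model. Below is a minimal sketch of that comparison loop in Python; it is illustrative only, and every name in it (centaur_score, solve, grade) is hypothetical rather than taken from the paper.

```python
import statistics
from typing import Callable

def centaur_score(
    humans: list,       # pool of human participants
    models: dict,       # model name -> AI assistant (callable)
    tasks: list,        # task prompts (e.g., writing or coding tasks)
    solve: Callable,    # runs one human-AI pair on one task, returns a solution
    grade: Callable,    # scores a completed solution
) -> dict:
    """Average joint human-AI performance per model (a centaur evaluation)."""
    results = {name: [] for name in models}
    for name, model in models.items():
        for human in humans:
            for task in tasks:
                # The unit of evaluation is the joint work product,
                # not the model's unaided output.
                solution = solve(human, model, task)
                results[name].append(grade(solution))
    # Compare models by mean joint score across humans and tasks.
    return {name: statistics.mean(scores) for name, scores in results.items()}
```

The key design choice, under this reading, is that the score attached to each model is a function of the human-AI pair's output, so a model that is hard to work with scores worse than one that augments its human partner, even if their standalone performance is identical.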
