Skip to yearly menu bar Skip to main content


Poster

The Elicitation Game: Evaluating Capability Elicitation Techniques

Felix Hofstätter · Teun van der Weij · Jayden Teoh · Rada Djoneva · Henning Bartsch · Francis Rhys Ward

East Exhibition Hall A-B #E-1006
[ ] [ ]
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Capability evaluations are required to understand and regulate AI systems that maybe deployed or further developed. Therefore, it is important that evaluations providean accurate estimation of an AI system’s capabilities. However, in numerous cases,previously latent capabilities have been elicited from models, sometimes longafter initial release. Accordingly, substantial efforts have been made to developmethods for eliciting latent capabilities from models. In this paper, we evaluate theeffectiveness of capability elicitation techniques by intentionally training modelorganisms – language models with hidden capabilities that are revealed by apassword. We introduce a novel method for training model organisms, basedon circuit-breaking, which is more robust to elicitation techniques than standardpassword-locked models. We focus on elicitation techniques based on promptingand activation steering, and compare these to fine-tuning methods. Promptingtechniques can elicit the actual capability of both password-locked and circuit-broken model organisms in an MCQA setting, while steering fails to do so. Fora code-generation task, only fine-tuning can elicit the hidden capabilities of ournovel model organism. Additionally, our results suggest that combining techniquesimproves elicitation. Still, if possible, fine-tuning should be the method of choiceto improve the trustworthiness of capability evaluations.

Lay Summary:

Testing Hidden AI Capabilities: Can We Find What AI Systems Are Really Capable Of?When we build AI systems, we need to understand what they can actually do – both for safety and to make good decisions about how to use them. However, AI systems sometimes have hidden abilities that only show up later, which can be a problem if we've underestimated what they're capable of.To address this challenge, researchers have developed techniques to try to uncover these hidden capabilities. But how well do these techniques actually work? This study tested different methods by creating special AI models with intentionally hidden abilities – like having a secret password that unlocks extra skills.We created two types of "model organisms" (test AI systems): simple password-protected models and more sophisticated ones using a technique called "circuit-breaking" that makes the hidden abilities harder to find. We then tried various methods to uncover these hidden capabilities, including different prompting strategies, a technique called activation steering, and retraining the models.The results showed that simple prompting could reveal hidden abilities in multiple-choice question tasks, but steering techniques failed. For programming tasks, only retraining the AI model could uncover the hidden capabilities of our more sophisticated test models. Combining multiple techniques generally worked better than using just one.These findings suggest that if we want to be confident about what an AI system can do, we should use multiple evaluation methods, with retraining being the most reliable approach. This research helps make AI capability testing more trustworthy and thorough.

Chat is not available.