

Poster in Workshop: DIG-BUGS: Data in Generative Models (The Bad, the Ugly, and the Greats)

A Representation Engineering Perspective on the Effectiveness of Multi-Turn Jailbreaks

Blake Bullwinkel · Mark Russinovich · Ahmed Salem · Santiago Zanella-Beguelin · Dan Jones · Giorgio Severi · Eugenia Kim · Keegan Hines · Amanda Minnich · Yonatan Zunger · Ram Shankar Siva Kumar

Keywords: [ AI security ] [ LLM interpretability ] [ AI safety ] [ LLM jailbreaks ]

[ Project Page ]
Sat 19 Jul 3 p.m. PDT — 3:45 p.m. PDT

Abstract:

Recent research has demonstrated that state-of-the-art LLMs and defenses remain susceptible to multi-turn jailbreak attacks. These attacks require only closed-box model access and are often easy to perform manually, posing a significant threat to the safe and secure deployment of LLM-based systems. We study the effectiveness of the Crescendo multi-turn jailbreak at the level of intermediate model representations and find that safety-aligned LLMs often represent Crescendo responses as more benign than harmful, especially as the number of conversation turns increases. Our analysis indicates that at each turn, Crescendo prompts tend to keep model outputs in a "benign" region of representation space, effectively tricking the model into fulfilling harmful requests. Further, our results help explain why single-turn jailbreak defenses like circuit breakers are generally ineffective against multi-turn attacks, motivating the development of mitigations that address this generalization gap.
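The abstract does not specify the probing method used, but the general representation-engineering idea of scoring a response along a harmful-vs-benign direction in hidden-state space can be sketched as follows. This is a minimal illustration on synthetic vectors, not the authors' implementation: the dimensionality, the difference-of-means probe, and the sample data are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden-state dimensionality; real LLM layers are much wider

# Synthetic stand-ins for intermediate hidden states of responses whose
# harmful / benign labels are known (e.g. from a labeled calibration set).
harmful = rng.normal(loc=1.0, size=(100, d))
benign = rng.normal(loc=-1.0, size=(100, d))

# Difference-of-means direction, a common representation-engineering probe:
# the unit vector pointing from the benign cluster toward the harmful one.
direction = harmful.mean(axis=0) - benign.mean(axis=0)
direction /= np.linalg.norm(direction)

def harmfulness_score(hidden_state: np.ndarray) -> float:
    """Project a hidden state onto the harmfulness direction."""
    return float(hidden_state @ direction)

# Under this probe, a jailbroken response is flagged only if its hidden
# state projects onto the harmful side; the paper's finding is that
# Crescendo responses often score on the benign side instead.
mean_harmful = np.mean([harmfulness_score(h) for h in harmful])
mean_benign = np.mean([harmfulness_score(b) for b in benign])
print(mean_harmful > mean_benign)  # True
```

With real models one would extract the hidden states at a chosen layer (e.g. via `output_hidden_states=True` in Hugging Face `transformers`) rather than sampling them synthetically; the scoring step is otherwise the same.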
