Poster
SafeArena: Evaluating the Safety of Autonomous Web Agents
Ada Tur · Nicholas Meade · Xing Han Lù · Alejandra Zambrano · Arkil Patel · Esin Durmus · Spandana Gella · Karolina Stanczak · Siva Reddy
East Exhibition Hall A-B #E-701
LLM-based agents are becoming increasingly proficient at solving web-based tasks. With this capability comes a greater risk of misuse for malicious purposes, such as posting misinformation in an online forum or selling illicit substances on a website. To evaluate these risks, we propose SafeArena, a benchmark focused on the deliberate misuse of web agents. SafeArena comprises 250 safe and 250 harmful tasks across four websites. We classify the harmful tasks into five harm categories (misinformation, illegal activity, harassment, cybercrime, and social bias), designed to assess realistic misuses of web agents. We evaluate leading LLM-based web agents, including GPT-4o, Claude-3.5 Sonnet, Qwen-2-VL 72B, and Llama-3.2 90B, on our benchmark. To systematically assess their susceptibility to harmful tasks, we introduce the Agent Risk Assessment framework, which categorizes agent behavior across four risk levels. We find agents are surprisingly compliant with malicious requests, with GPT-4o and Qwen-2 completing 34.7% and 27.3% of harmful requests, respectively. Our findings highlight the urgent need for safety alignment procedures for web agents.
As AI agents get better at using the web—browsing websites, filling out forms, and posting content—they also introduce new safety risks. What if someone asked them to do something harmful, like spreading misinformation or harassing someone on an online forum?

To better understand these safety risks, we created SafeArena, a test suite of 500 web tasks: half harmless, and half intentionally harmful. These harmful tasks span realistic threats like misinformation, online harassment, cybercrime, and more. We used this benchmark to evaluate some of today's latest AI web agents, including agents backed by GPT-4o and Claude 3.5.

We found that even top-performing agents often followed harmful instructions: GPT-4o completed over a third of our malicious tasks. To analyze this behavior systematically, we developed a framework that categorizes web agent behavior by risk level. Our findings highlight an urgent need for stronger safety alignment, especially as these agents become more capable and widely deployed. By introducing SafeArena, we aim to provide a crucial benchmark to support and accelerate ongoing efforts to design safe and aligned web agents.