

Oral in Workshop: Multi-Agent Systems in the Era of Foundation Models: Opportunities, Challenges and Futures

AgentSafe: Benchmarking the Safety of Embodied Agents on Hazardous Instructions

Aishan Liu · Zonghao Ying · Le Wang · Junjie Mu · Jinyang Guo · Jiakai Wang · Yuqing Ma · Siyuan Liang · Mingchuan Zhang · Xianglong Liu · Dacheng Tao


Abstract:

The rapid advancement of vision-language models (VLMs) and their integration into embodied agents have unlocked powerful capabilities for decision-making. However, as these systems are increasingly deployed in real-world environments, they face mounting safety concerns, particularly when responding to hazardous instructions. In this work, we propose AgentSafe, the first comprehensive benchmark for evaluating the safety of embodied VLM agents under hazardous instructions. AgentSafe simulates realistic agent-environment interactions within a simulation sandbox and incorporates a novel adapter module that bridges the gap between high-level VLM outputs and low-level embodied controls. Specifically, it maps recognized visual entities to manipulable objects and translates abstract plans into executable atomic actions in the environment. Building on this, we construct a risk-aware instruction dataset inspired by Asimov's Three Laws of Robotics, comprising base risky instructions and their jailbreak-mutated variants. The benchmark includes 45 adversarial scenarios, 1,350 hazardous tasks, and 8,100 hazardous instructions, enabling systematic testing under adversarial conditions spanning the perception, planning, and action execution stages. Extensive experiments reveal that current embodied VLM agents are highly vulnerable to hazardous instructions and frequently violate safety principles, underscoring the need for rigorous safety evaluation.
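The adapter module the abstract describes performs two translation steps: grounding VLM-recognized entities to manipulable simulator objects, and compiling abstract plan steps into atomic actions the environment can execute. A minimal sketch of that idea might look like the following; the class, method, and verb-table names here are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass


@dataclass
class AtomicAction:
    """A low-level command the simulator can execute directly (assumed interface)."""
    verb: str          # e.g. "grasp", "move_to", "release"
    target_id: str     # simulator handle of the object acted on


class VLMToSimAdapter:
    """Hypothetical bridge between high-level VLM outputs and embodied controls."""

    # Abstract plan verbs mapped to an assumed atomic-action vocabulary.
    VERB_TABLE = {
        "pick up": "grasp",
        "go to": "move_to",
        "put down": "release",
    }

    def __init__(self, scene_objects: dict[str, str]):
        # scene_objects: entity label as the VLM names it -> simulator handle
        self.scene_objects = scene_objects

    def ground_entity(self, label: str) -> str | None:
        """Map a VLM-recognized entity label to a manipulable object handle."""
        return self.scene_objects.get(label.lower())

    def translate(self, plan: list[tuple[str, str]]) -> list[AtomicAction]:
        """Compile (verb, entity) plan steps into executable atomic actions.

        Steps with unknown verbs or ungrounded entities are dropped, so only
        actions the environment can actually execute remain.
        """
        actions = []
        for verb, entity in plan:
            atomic_verb = self.VERB_TABLE.get(verb)
            handle = self.ground_entity(entity)
            if atomic_verb and handle:
                actions.append(AtomicAction(atomic_verb, handle))
        return actions


# Example: ground a two-step plan against a toy scene.
adapter = VLMToSimAdapter({"knife": "obj_knife_01", "counter": "loc_counter_03"})
plan = [("go to", "counter"), ("pick up", "knife")]
print(adapter.translate(plan))

Keeping grounding and action compilation in one explicit layer is what lets a benchmark like this intercept and inspect every instruction before it reaches the environment, which is presumably where safety checks against hazardous commands would hook in.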
