

Poster in Workshop on Technical AI Governance

Technical Requirements for Halting Dangerous AI Activities

Peter Barnett · Aaron Scher · David Abecassis


Abstract:

The rapid development of AI systems poses unprecedented risks, including loss of control, misuse, geopolitical instability, and concentration of power. To navigate these risks and avoid worst-case outcomes, governments may proactively establish the capability for a coordinated halt on dangerous AI development and deployment. In this paper, we outline key technical interventions that could enable such a coordinated halt. We discuss how these interventions may contribute to restricting various dangerous AI activities, and show how they can form the technical foundation for potential AI governance plans.
