Invited Speaker
in
Workshop: Workshop on Technical AI Governance
In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
Shayne Longpre
The widespread deployment of general-purpose AI (GPAI) systems introduces significant new risks. Based on a collaboration among experts from the fields of software security, machine learning, law, social science, and policy, we design and propose new flaw reporting and coordination measures for GPAI systems, including flaw report forms designed for rapid triaging, AI bug bounty programs, and coordination centers for universally transferable flaws that may pertain to many developers at once. By promoting robust reporting and coordination across the AI ecosystem, these proposals could significantly improve the safety, security, and accountability of GPAI systems.