Invited Speaker
in
Workshop on Technical AI Governance

Technical AI Governance in Practice: What Tools Miss, and Where We Go Next

Victor Ojewale

Sat 19 Jul 2:30 p.m. PDT — 3 p.m. PDT

Abstract:

Audits are increasingly used to identify risks in deployed AI systems, but current audit tooling often falls short by focusing narrowly on evaluation while neglecting key needs such as harms discovery, audit communication, and support for advocacy. Based on interviews with 35 practitioners and a landscape analysis of over 400 tools, I outline how this limited scope hinders effective accountability. Even where tools do focus on evaluation, they often rely on monolingual and decontextualized methods that fail to capture real-world model behaviour. I illustrate this through a case study on multilingual evaluation, in which we developed functional benchmarks in six languages. These benchmarks reveal significant cross-linguistic fragility in LLM performance and underscore the risks of governance frameworks that assume language-agnostic capability. Together, these findings point to the need for a more expansive vision of technical governance, one that centers contextual robustness and the infrastructural conditions for meaningful accountability.
