

Spotlight Talk in the Workshop on Technical AI Governance

Distributed and Decentralised Training: Technical Governance Challenges in a Shifting AI Landscape

Jakub Kryś · Yashvardhan Sharma · Janet Egan

Sat 19 Jul 4 p.m. PDT — 4:10 p.m. PDT

Abstract: Advances in low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios, distributed and decentralised training, which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability. While these trends foreshadow a possible new paradigm that could challenge key assumptions of compute governance, we emphasise that certain policy levers, like export controls, remain relevant and effective. We also acknowledge potential benefits of decentralised AI, including privacy-preserving training runs that could unlock access to more data, and the mitigation of harmful power concentration. Our goal is to support more precise policymaking around compute, capability proliferation, and decentralised AI development.
