Poster
in
Workshop: Workshop on Technical AI Governance

Compute Requirements for Algorithmic Innovation in Frontier AI Models

Peter Barnett


Abstract:

Algorithmic innovation in the pretraining of large language models has driven a massive reduction in the total compute required to reach a given level of capability. In this paper, we catalog 36 pre-training algorithmic innovations used in LLaMA 3 and DeepSeek-V3. For each innovation, we estimate both the total FLOP used in its development and the FLOP/s of the hardware utilized. For innovations that used significant compute, these requirements double each year. We then use this dataset to investigate the effect of compute caps on innovation, finding that compute caps alone are unlikely to dramatically slow algorithmic progress.
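The doubling trend described above implies exponential growth in the compute needed to develop significant innovations. A minimal sketch of this projection, assuming a hypothetical baseline of 1e24 FLOP and the one-year doubling time stated in the abstract (the baseline value is illustrative, not from the paper):

```python
def projected_flop(years: float, baseline_flop: float = 1e24,
                   doubling_time_years: float = 1.0) -> float:
    """Project the FLOP required to develop a significant innovation
    `years` from now, given exponential growth with a fixed doubling time.

    `baseline_flop` is a hypothetical starting value for illustration;
    the one-year doubling time follows the trend reported in the abstract.
    """
    return baseline_flop * 2 ** (years / doubling_time_years)


# Example: after 3 years of annual doubling, requirements grow 8x.
print(projected_flop(3))  # 8e+24
```

Under this trend, a compute cap set at today's requirements would be reached by innovation development runs within a few years, which is the scenario the paper's analysis of compute caps examines.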