

Contributed talk in Workshop: 3rd Workshop on High-dimensional Learning Dynamics (HiLD)

Emanuele Troiani (EPFL): Bayes optimal learning of attention-indexed models

Fri 18 Jul 2:45 p.m. PDT — 3 p.m. PDT

Abstract:

We introduce the attention-indexed model (AIM), a theoretical framework for analyzing learning in deep attention layers. Inspired by multi-index models, AIM captures how token-level outputs emerge from layered bilinear interactions over high-dimensional embeddings. Unlike prior tractable attention models, AIM allows full-width key and query matrices, aligning more closely with practical transformers. Using tools from statistical mechanics and random matrix theory, we derive closed-form predictions for Bayes-optimal generalization error and identify sharp phase transitions as a function of sample complexity, model width, and sequence length. We propose a matching approximate message passing algorithm and show that gradient descent can reach optimal performance. AIM offers a solvable playground for understanding learning in modern attention architectures.
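The abstract describes token-level outputs arising from bilinear interactions between high-dimensional embeddings through full-width key and query matrices. A minimal sketch of one such attention layer is below; all dimensions, variable names, and the exact form of the interaction are illustrative assumptions, not the AIM definition from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only: sequence length L, embedding dim d.
L, d = 8, 64
X = rng.standard_normal((L, d))   # token embeddings
Q = rng.standard_normal((d, d))   # full-width query matrix
K = rng.standard_normal((d, d))   # full-width key matrix

# Bilinear interaction between token pairs: (X Q)(X K)^T, scaled by sqrt(d).
scores = (X @ Q) @ (X @ K).T / np.sqrt(d)

# Row-wise softmax turns scores into attention weights.
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Token-level outputs as attention-weighted combinations of embeddings.
out = weights @ X
```

This sketch only shows the forward map; the paper's contribution is the statistical analysis of learning Q and K in the Bayes-optimal setting, which is not reproduced here.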
