

Poster

Positional Attention: Expressivity and Learnability of Algorithmic Computation

Artur Back de Luca · George Giapitzakis · Shenghao Yang · Petar Veličković · Kimon Fountoulakis

East Exhibition Hall A-B #E-2304
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

There is a growing interest in the ability of neural networks to execute algorithmic tasks (e.g., arithmetic, summary statistics, and sorting). The goal of this work is to better understand the role of attention in Transformers for algorithmic execution. Its importance for algorithmic execution has been studied theoretically and empirically using parallel computational models. Notably, many parallel algorithms communicate between processors solely using positional information. Inspired by this observation, we investigate how Transformers can execute algorithms using positional attention, where attention weights depend exclusively on positional encodings. We prove that Transformers with positional attention (positional Transformers) maintain the same expressivity as parallel computational models, incurring a logarithmic depth cost relative to the input length. We analyze their in-distribution learnability and explore how parameter norms in positional attention affect sample complexity. Our results show that positional Transformers introduce a learning trade-off: while they exhibit better theoretical dependence on parameter norms, certain tasks may require more layers, which can, in turn, increase sample complexity. Finally, we empirically explore the out-of-distribution performance of positional Transformers and find that they perform well in tasks where their underlying algorithmic solution relies on positional information.
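To make the idea of positional attention concrete, here is a minimal single-head sketch in NumPy. It is illustrative only, not the authors' parametrization: the attention scores are computed from the positional encodings P alone (queries and keys are functions of P), while the values are still computed from the input X, so the attention pattern is the same for every input of a given length. All names and dimensions below are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def positional_attention_layer(X, P, W_Q, W_K, W_V, W_O):
    """One single-head positional-attention layer (illustrative sketch).

    X : (n, d)  input values (token features)
    P : (n, p)  positional encodings, fixed for a given sequence length
    Scores use ONLY P, so the attention weights are independent of X.
    """
    scores = (P @ W_Q) @ (P @ W_K).T / np.sqrt(W_Q.shape[1])  # (n, n), positions only
    A = softmax(scores, axis=-1)                               # attention weights
    return A @ (X @ W_V) @ W_O                                 # values still come from X

# Tiny usage example with random parameters (hypothetical dimensions).
rng = np.random.default_rng(0)
n, d, p, dk = 6, 4, 8, 8
X = rng.normal(size=(n, d))
P = rng.normal(size=(n, p))          # e.g., sinusoidal or learned encodings in practice
W_Q, W_K = rng.normal(size=(p, dk)), rng.normal(size=(p, dk))
W_V, W_O = rng.normal(size=(d, d)), rng.normal(size=(d, d))
out = positional_attention_layer(X, P, W_Q, W_K, W_V, W_O)    # (n, d)
```

In contrast to standard self-attention, where queries and keys are computed from X and the attention pattern therefore varies with the input values, the pattern above depends only on positions, mirroring how many parallel algorithms route information between processors based solely on their indices.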

Lay Summary:

Transformers, a powerful type of neural network, can perform complex tasks like solving math problems or sorting numbers. But how exactly do they achieve this? In our study, we explore the role of the attention mechanism—a core part of Transformers—by analyzing a simplified version called positional attention, where attention depends only on the positions of inputs, not their actual values. This approach reflects how some classical parallel algorithms work. We find that Transformers using positional attention can still perform complex computations and offer theoretical advantages in how they learn. Our experiments show that this simplified attention can generalize better in tasks where position matters more than content. This helps us understand when and how attention enables Transformers to solve complex tasks.
