Poster
Positional Encoding meets Persistent Homology on Graphs
Yogesh Verma · Amauri Souza · Vikas Garg
East Exhibition Hall A-B #E-2806
The local inductive bias of message-passing graph neural networks (GNNs) hampers their ability to exploit key structural information (e.g., connectivity and cycles). Positional encoding (PE) and Persistent Homology (PH) have emerged as two promising approaches to mitigate this issue. PE schemes endow GNNs with location-aware features, while PH methods enhance GNNs with multiresolution topological features. However, a rigorous theoretical characterization of the relative merits and shortcomings of PE and PH has remained elusive. We bridge this gap by establishing that neither paradigm is more expressive than the other, providing novel constructions where one approach fails but the other succeeds. Our insights inform the design of a novel learnable method, PiPE (Persistence-informed Positional Encoding), which is provably more expressive than both PH and PE. PiPE demonstrates strong performance across a variety of tasks (e.g., molecule property prediction, graph classification, and out-of-distribution generalization), thereby advancing the frontiers of graph representation learning. Code is available at https://github.com/Aalto-QuML/PIPE.
Graph Neural Networks (GNNs) are powerful tools for learning from complex data like molecules or social networks, but they often miss key structural patterns. Two popular approaches to mitigate this issue are positional encodings (PE) and persistent homology (PH)—but it’s unclear which is better. We show that PE and PH are incomparable: each can detect graph patterns that the other misses. Using this insight, we propose PiPE (Persistence-informed Positional Encoding), a new method that combines the strengths of both approaches. Our method outperforms existing techniques on real-world tasks like drug molecule property prediction, graph classification, and out-of-distribution generalization.
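To make the contrast between the two feature families concrete, here is a minimal, self-contained sketch (not the paper's implementation) of the standard versions of each: Laplacian-eigenvector positional encodings, which give every node location-aware coordinates, and 0-dimensional persistent homology over a vertex filtration, which summarizes when connected components are born and merge. The function names (`laplacian_pe`, `zero_dim_persistence`) and the degree-based filtration are illustrative choices, not taken from PiPE.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Positional encoding: first k non-trivial eigenvectors of the graph Laplacian."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    _, vecs = np.linalg.eigh(lap)      # eigenvalues in ascending order
    return vecs[:, 1:k + 1]            # skip the constant (eigenvalue-0) eigenvector

def zero_dim_persistence(adj, f):
    """0-dim persistence pairs for a vertex filtration f (sublevel sets).
    A component is born at its earliest vertex's filtration value and dies
    when it merges into an older component (elder rule)."""
    n = len(f)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    birth = {i: f[i] for i in range(n)}
    pairs = []
    # an edge appears once both endpoints have appeared
    edges = sorted((max(f[u], f[v]), u, v)
                   for u in range(n) for v in range(u + 1, n) if adj[u, v])
    for t, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
            pairs.append((birth[young], t))   # younger component dies at time t
            parent[young] = old
    roots = {find(i) for i in range(n)}
    pairs += [(birth[r], float("inf")) for r in roots]  # survivors never die
    return pairs

# Toy example: a 5-cycle, a structure plain message passing struggles to detect
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

pe = laplacian_pe(adj, k=2)                    # per-node location-aware features
ph = zero_dim_persistence(adj, f=adj.sum(1))   # global topological summary
```

Note the different shapes of the two outputs: the PE is a matrix of per-node features that can be concatenated to node attributes, while the persistence diagram is a set of (birth, death) pairs describing the whole graph, which is why the two paradigms capture different information and neither subsumes the other.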