Poster

Beyond Message Passing: Neural Graph Pattern Machine

Zehong Wang · Zheyuan Zhang · Tianyi Ma · Nitesh Chawla · Chuxu Zhang · Yanfang Ye

East Exhibition Hall A-B #E-3105
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: Graph learning tasks often hinge on identifying key substructure patterns---such as triadic closures in social networks or benzene rings in molecular graphs---that underpin downstream performance. However, most existing graph neural networks (GNNs) rely on message passing, which aggregates local neighborhood information iteratively and struggles to explicitly capture fundamental motifs such as triangles, $k$-cliques, and rings. This limitation hinders both expressiveness and long-range dependency modeling. In this paper, we introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and an improved ability to capture long-range dependencies. Empirical evaluations across four standard tasks---node classification, link prediction, graph classification, and graph regression---demonstrate that GPM outperforms state-of-the-art baselines. Further analysis reveals that GPM exhibits strong out-of-distribution generalization, desirable scalability, and enhanced interpretability. Code and datasets are available at: https://github.com/Zehong-Wang/GPM.

Lay Summary:

We introduce the Neural Graph Pattern Machine (GPM), a novel approach to graph representation learning that avoids the limitations of traditional message passing in graph neural networks (GNNs). Instead of iteratively aggregating information from neighboring nodes, GPM learns directly from meaningful substructures---like triangles, cliques, and cycles---that often determine key properties of graphs, such as molecular rings or social triads. The model samples these patterns using random walks, encodes them with sequential models, and identifies the most relevant ones using a transformer-based attention mechanism. This design enables GPM to capture both local and long-range dependencies more effectively than standard GNNs.

Extensive experiments across node-, link-, and graph-level tasks show that GPM consistently outperforms state-of-the-art baselines in accuracy, robustness to distribution shifts, and scalability to large graphs. GPM also provides enhanced interpretability by highlighting the dominant patterns driving its predictions. While the current method relies on random sampling, which may introduce inefficiencies, the framework opens the door to more expressive and pattern-aware graph learning, with potential extensions to unsupervised learning, integration with large language models, and applications in complex domains such as drug discovery and social systems.
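
To make the sample-encode-attend pipeline concrete, here is a minimal PyTorch sketch of the three steps the lay summary describes. Everything in it is an illustrative assumption rather than the authors' implementation: the function and class names (sample_walk, PatternMachine), the choice of a GRU as the sequential walk encoder, the single learnable query used for attention pooling, and all dimensions are hypothetical; the actual code is in the repository linked above.

    # Sketch of GPM's pipeline as described in the lay summary:
    # (1) sample substructures via random walks, (2) encode each walk with a
    # sequential model, (3) attend over pattern embeddings to weight the
    # task-relevant ones. Names and hyperparameters are assumptions.
    import random
    import torch
    import torch.nn as nn

    def sample_walk(adj: dict, start: int, length: int) -> list:
        """Sample one random walk (a candidate substructure) from `start`."""
        walk = [start]
        for _ in range(length - 1):
            nbrs = adj[walk[-1]]
            if not nbrs:          # dead end: stop the walk early
                break
            walk.append(random.choice(nbrs))
        return walk

    class PatternMachine(nn.Module):
        """Encode sampled walks sequentially, then let a learnable query
        attend over the pattern embeddings to prioritize relevant ones."""
        def __init__(self, feat_dim: int, hidden_dim: int, num_classes: int):
            super().__init__()
            self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.query = nn.Parameter(torch.randn(1, 1, hidden_dim))
            self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4,
                                              batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, walk_feats: torch.Tensor) -> torch.Tensor:
            # walk_feats: (num_patterns, walk_len, feat_dim)
            _, h = self.encoder(walk_feats)       # h: (1, num_patterns, hidden)
            patterns = h.squeeze(0).unsqueeze(0)  # (1, num_patterns, hidden)
            pooled, weights = self.attn(self.query, patterns, patterns)
            # `weights` shows which patterns drive the prediction
            return self.head(pooled.squeeze(1))

    # Toy usage: a 4-cycle with random node features.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    feats = torch.randn(4, 8)                     # 4 nodes, 8-dim features
    walks = [sample_walk(adj, random.randrange(4), 5) for _ in range(16)]
    max_len = max(len(w) for w in walks)
    padded = torch.stack([
        torch.cat([feats[w], feats.new_zeros(max_len - len(w), 8)])
        for w in walks
    ])
    logits = PatternMachine(feat_dim=8, hidden_dim=32, num_classes=2)(padded)
    print(logits.shape)                           # torch.Size([1, 2])

The attention weights returned alongside the pooled representation are the kind of signal one would inspect for the interpretability analysis mentioned above: patterns receiving high weight are the ones dominating the prediction.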
