Poster
Beyond Message Passing: Neural Graph Pattern Machine
Zehong Wang · Zheyuan Zhang · Tianyi Ma · Nitesh Chawla · Chuxu Zhang · Yanfang Ye
East Exhibition Hall A-B #E-3105
We introduce the Graph Pattern Machine (GPM), a novel approach to graph representation learning that sidesteps the limitations of traditional message passing in graph neural networks (GNNs). Instead of iteratively aggregating information from neighboring nodes, GPM learns directly from meaningful substructures, such as triangles, cliques, and cycles, that often determine key properties of graphs (e.g., molecular rings or social triads). The model samples these patterns via random walks, encodes them with sequential models, and identifies the most relevant ones using a transformer-based attention mechanism. This design enables GPM to capture both local and long-range dependencies more effectively than standard GNNs. Extensive experiments across node-, link-, and graph-level tasks show that GPM consistently outperforms state-of-the-art baselines in accuracy, robustness to distribution shifts, and scalability to large graphs. GPM also offers enhanced interpretability by highlighting the dominant patterns driving its predictions. Although the current method relies on random sampling, which may introduce inefficiencies, the framework opens the door to more expressive, pattern-aware graph learning, with potential extensions to unsupervised learning, integration with large language models, and applications in complex domains such as drug discovery and social systems.
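To make the sample-encode-attend pipeline concrete, the following is a minimal PyTorch sketch of that flow, not the authors' implementation: the names (sample_random_walks, PatternMachine), walk lengths, and hidden sizes are illustrative assumptions, and the sequence encoder and attention readout stand in for the paper's sequential models and transformer-based pattern selection.

import random
import torch
import torch.nn as nn

def sample_random_walks(adj, start, num_walks=8, walk_len=6):
    """Sample fixed-length random walks as candidate substructure patterns."""
    walks = []
    for _ in range(num_walks):
        node, walk = start, [start]
        for _ in range(walk_len - 1):
            neighbors = adj.get(node, [])
            node = random.choice(neighbors) if neighbors else node  # stay put at dead ends
            walk.append(node)
        walks.append(walk)
    return walks

class PatternMachine(nn.Module):
    """Encode sampled walks with a sequence model, then use attention to
    weight the patterns most relevant to the prediction."""
    def __init__(self, num_nodes, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, hidden)
        self.seq_encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, hidden))  # learned readout query
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, walks):
        x = self.embed(torch.tensor(walks))   # (num_walks, walk_len, hidden)
        _, h = self.seq_encoder(x)            # final hidden state per walk: (1, num_walks, hidden)
        patterns = h[-1].unsqueeze(0)         # one token per encoded pattern
        out, weights = self.attn(self.query, patterns, patterns)
        logits = self.classifier(out.squeeze(0).squeeze(0))
        return logits, weights.squeeze()      # attention weights expose the dominant patterns

# Toy usage: classify node 0 of a 4-node graph containing a triangle.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
model = PatternMachine(num_nodes=4)
logits, pattern_scores = model(sample_random_walks(adj, start=0))

Returning the attention weights alongside the logits mirrors the interpretability claim above: the highest-weighted walks indicate which sampled substructures drove the prediction.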