Poster
Pruning for GNNs: Lower Complexity with Comparable Expressiveness
Dun Ma · Jianguo Chen · Wenguo Yang · Suixiang Gao · Shengminjie Chen
East Exhibition Hall A-B #E-3003
In recent years, the pursuit of higher expressive power in graph neural networks (GNNs) has often led to more complex aggregation mechanisms and deeper architectures. To address these issues, we identify redundant structures in GNNs and, by pruning them, propose pruned variants of MP-GNNs, K-Path GNNs, and K-Hop GNNs built on their original architectures. We show that 1) although some structures are pruned in Pruned MP-GNNs and Pruned K-Path GNNs, their expressive power is not compromised; 2) K-Hop MP-GNNs and their pruned counterparts exhibit equivalent expressiveness on regular and strongly regular graphs; and 3) the complexity of Pruned K-Path GNNs and Pruned K-Hop GNNs is lower than that of MP-GNNs, yet their expressive power is higher. Experimental results validate our refinements, demonstrating competitive performance across benchmark datasets with improved efficiency.
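As background for the abstract above, the snippet below is a minimal PyTorch sketch of one standard message-passing (MP-GNN) layer with sum aggregation, included only to fix what "aggregation mechanism" refers to. The layer name, shapes, and toy graph are illustrative assumptions; this is not the pruned architecture proposed in the paper.

```python
# Minimal sketch of a generic MP-GNN layer (not the authors' pruned model):
# each node sums its neighbors' features and updates via a small MLP.
import torch
import torch.nn as nn

class SimpleMPLayer(nn.Module):
    """One round of message passing: h_v' = MLP([h_v, sum_{u in N(v)} h_u])."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, dim]; edge_index: [2, num_edges] listing (source, target) pairs
        src, dst = edge_index
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, h[src])          # sum incoming messages per node
        return self.update(torch.cat([h, agg], dim=-1))

# Toy usage: a 4-node path graph 0-1-2-3 with 8-dimensional node features
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
print(SimpleMPLayer(8)(h, edge_index).shape)    # torch.Size([4, 8])
```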
Graph neural networks (GNNs) are powerful tools that help computers understand complex connections, like social networks or molecules. But to make them more accurate, researchers often add more layers and features, which also makes them slower and harder to train.

In our work, we asked: can we make GNNs simpler without losing their ability to understand complex structures? We discovered that many parts of GNNs are redundant, and by carefully removing them, the expressive power of the pruned GNNs remains unchanged.

This makes GNNs more practical for real-world applications, especially where computing power is limited, such as on mobile devices or when analyzing very large graphs.