

Poster

Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence

Yuankai Luo · Lei Shi · Xiao-Ming Wu

East Exhibition Hall A-B #E-3102
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies. Conversely, Graph Transformers (GTs) are regarded as superior due to their use of global attention mechanisms, which potentially mitigate these challenges. The literature frequently suggests that GTs outperform GNNs in graph-level tasks, especially for graph classification and regression on small molecular graphs. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques (edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding) to effectively tackle graph-level tasks. We conduct a systematic re-evaluation of three classic GNNs (GCN, GIN, and GatedGCN) enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results reveal that, contrary to prevailing beliefs, these classic GNNs consistently match or surpass the performance of GTs, securing top-three rankings across all datasets and achieving first place in eight. Furthermore, they are markedly more efficient, running several times faster than GTs on many datasets. These findings highlight the potential of simple GNN architectures and challenge the notion that the complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/GNNPlus.
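To make the framework concrete, the sketch below shows what one GNN+-style block could look like in plain PyTorch: a GCN-style message-passing step wrapped with normalization, dropout, a residual connection, and a feed-forward network, with positional encodings assumed to be concatenated to the input node features before the first block. This is a minimal illustration under those assumptions, not the authors' implementation (see the linked repository for that); the class and parameter names (GNNPlusBlock, ffn_mult) are hypothetical, and edge feature integration is omitted for brevity.

```python
# Minimal sketch (not the released GNNPlus code) of one GNN+-style block.
import torch
import torch.nn as nn


class GNNPlusBlock(nn.Module):
    """GCN-style message passing + normalization, dropout, residual, FFN."""

    def __init__(self, dim: int, dropout: float = 0.1, ffn_mult: int = 2):
        super().__init__()
        self.lin = nn.Linear(dim, dim)        # feature transform before aggregation
        self.norm = nn.BatchNorm1d(dim)       # normalization
        self.drop = nn.Dropout(dropout)       # dropout
        self.ffn = nn.Sequential(             # feed-forward network
            nn.Linear(dim, ffn_mult * dim),
            nn.ReLU(),
            nn.Linear(ffn_mult * dim, dim),
        )

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x:        [num_nodes, dim] node features (positional encodings, if used,
        #           are assumed to be concatenated to the inputs before block 1)
        # adj_norm: [num_nodes, num_nodes] symmetrically normalized adjacency
        h = adj_norm @ self.lin(x)            # GCN-style neighborhood aggregation
        h = self.drop(torch.relu(self.norm(h)))
        h = x + h                             # residual connection
        return h + self.drop(self.ffn(h))     # FFN with a second residual
```

For a graph-level prediction, several such blocks would typically be stacked and followed by a pooling readout (e.g., mean or sum over nodes) and a small prediction head; that readout is not shown above.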

Lay Summary:

Graph-level tasks, such as predicting whether a molecule is toxic or determining the function of a protein, are essential in fields like drug discovery and chemistry. While Graph Neural Networks (GNNs) are a popular tool for these tasks, many researchers now favor more complex models like Graph Transformers (GTs), which can capture long-range relationships using attention mechanisms.

In this study, we revisit the potential of classic GNNs for graph-level tasks. We introduce GNN+, a framework that enhances classic GNNs with six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding. We then re-evaluate three classic GNNs (GCN, GIN, and GatedGCN) on 14 well-known graph-level datasets spanning both classification and regression tasks.

Contrary to common belief, we find that these classic GNNs often match or even outperform GTs, and do so with significantly higher efficiency. These findings show that simple, well-tuned GNNs remain powerful tools for graph-level learning, challenging the assumption that complex architectures are always better.

The code is open-source at: https://github.com/LUOyk1999/GNNPlus.
