Poster
Aggregation Buffer: Revisiting DropEdge with a New Parameter Block
Dooho Lee · Myeong Kong · Sagad Hamid · Cheonwoo Lee · Jaemin Yoo
East Exhibition Hall A-B #E-3100
We revisit DropEdge, a data augmentation technique for GNNs that randomly removes edges during training to expose diverse graph structures. Although it is a promising approach for reducing overfitting to specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge stems from a fundamental limitation shared by many GNN architectures. Based on this analysis, we propose the Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing this limitation of DropEdge. Our method is compatible with any GNN model and shows consistent performance improvements on multiple datasets. Moreover, it serves as a unifying solution that effectively addresses well-known problems such as degree bias and structural disparity. Code and datasets are available at https://github.com/dooho00/agg-buffer.
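For readers unfamiliar with DropEdge, the augmentation itself is simple to state: at each training step, every edge is kept or dropped independently at random. The sketch below is a minimal illustration in PyTorch, not the authors' code; it assumes the [2, num_edges] `edge_index` convention used by libraries such as PyTorch Geometric, and the function name `drop_edge` and drop probability `p` are illustrative.

```python
import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Minimal DropEdge sketch: keep each edge independently with prob. 1 - p.

    edge_index: [2, num_edges] COO connectivity (PyTorch Geometric style).
    Returns a randomly subsampled edge_index, resampled every training step.
    """
    keep_mask = torch.rand(edge_index.size(1), device=edge_index.device) >= p
    return edge_index[:, keep_mask]
```

In a typical training loop, `drop_edge` would be applied to the full edge set once per forward pass, so the model sees a different graph structure at every step.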
Randomly removing parts of the input is a common way to help machine learning models handle data variation. In graph-structured data, such as social networks where items are linked by edges, a technique called "DropEdge" removes some connections during training to improve reliability. However, we observe that its effectiveness is significantly limited in practice. Our analysis reveals that the issue lies in how graph models aggregate information from connected nodes. To address this, we introduce a post-training component called the "Aggregation Buffer," which we attach to a trained model to improve its ability to handle varying connection patterns. In tests on 12 datasets, it consistently and significantly improves performance. Our work highlights the importance of edge robustness, an often-overlooked issue, and offers a simple yet effective way to enhance graph models after training.
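The summary above describes the Aggregation Buffer only at a high level: a learnable parameter block attached to an already-trained GNN. The sketch below is a hypothetical illustration of that general idea, not the authors' actual design (which is available in the linked repository); the class name, the additive per-layer buffer, the frozen base weights, and the `(x, edge_index)` layer signature are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class BufferedLayer(nn.Module):
    """Hypothetical sketch of a post-training parameter block.

    Wraps a frozen, pre-trained GNN layer and adds a small learnable
    buffer to its aggregated output; only the buffer is trained.
    """
    def __init__(self, gnn_layer: nn.Module, hidden_dim: int):
        super().__init__()
        self.gnn_layer = gnn_layer
        for param in self.gnn_layer.parameters():
            param.requires_grad = False  # keep the trained weights fixed
        # New parameter block, broadcast across all nodes ([hidden_dim]).
        self.buffer = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # Assumed layer signature: (node features, edge_index) -> features.
        return self.gnn_layer(x, edge_index) + self.buffer
```

Because the base model stays frozen, training such a wrapper touches only the added parameters, which matches the paper's framing of the buffer as a component attached after training rather than a retrained model.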