Poster
HyperNear: Unnoticeable Node Injection Attacks on Hypergraph Neural Networks
Tingyi Cai · Yunliang Jiang · Ming Li · Lu Bai · Changqin Huang · Yi Wang
East Exhibition Hall A-B #E-3101
Modern AI tools are getting better at understanding complex systems, such as how people interact on social media or how diseases spread. A new kind of AI model, called a hypergraph neural network, is especially good at this. But we found that it may also be easier to fool than expected.

Our research shows that by adding just a few fake data points, these systems can be misled into making wrong predictions. These fake points can be carefully crafted to blend in, making the attack hard to notice. We built a tool called HyperNear to study this issue. It creates smart, hidden attacks that work well even when the attacker doesn't know much about the system.

This is the first study to show how vulnerable these models can be in such situations. We hope our work will help researchers build more secure AI systems that are ready for the real world.
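To make the idea of node injection concrete, here is a minimal, illustrative sketch (not the HyperNear method itself, whose details are in the paper). A hypergraph can be stored as an incidence matrix `H`, where `H[v, e] = 1` means node `v` belongs to hyperedge `e`. An injection attacker leaves all existing nodes and edges untouched and only appends a new row; the `inject_node` helper and the mean-feature blending heuristic below are hypothetical, chosen only to show why an injected node can be hard to spot.

```python
import numpy as np

# Toy hypergraph: 4 real nodes, 2 hyperedges, as an incidence matrix H.
# H[v, e] = 1 if node v belongs to hyperedge e.
H = np.array([
    [1, 0],
    [1, 1],
    [0, 1],
    [1, 0],
])
X = np.random.default_rng(0).normal(size=(4, 3))  # node features

def inject_node(H, X, edge_ids, perturbation):
    """Append one fake node that joins the given hyperedges.

    Existing rows of H and X are untouched: the attack only *adds*
    data, which is part of what makes it unnoticeable.
    """
    new_row = np.zeros((1, H.shape[1]))
    new_row[0, edge_ids] = 1
    H_attacked = np.vstack([H, new_row])
    # Blend in: start from the mean feature of the joined edges'
    # member nodes, so the fake node looks statistically similar
    # to its neighbors, then add a small adversarial perturbation.
    members = H[:, edge_ids].any(axis=1)
    blended = X[members].mean(axis=0) + perturbation
    X_attacked = np.vstack([X, blended[None, :]])
    return H_attacked, X_attacked

H2, X2 = inject_node(H, X, edge_ids=[1], perturbation=np.zeros(3))
print(H2.shape, X2.shape)  # -> (5, 2) (5, 3)
```

In a real attack, the perturbation would be optimized to flip the model's predictions while staying small enough to evade detection; here it is zero purely for illustration.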