Poster
COMRECGC: Global Graph Counterfactual Explainer through Common Recourse
Gregoire Fournier · Sourav Medya
East Exhibition Hall A-B #E-3105
Graph neural networks (GNNs) have been widely used in domains such as social networks, molecular biology, and recommendation systems. Concurrently, various explanation methods for GNNs have arisen to complement their black-box nature. Explanations of GNN predictions can be categorized into two types: factual and counterfactual. Given a GNN trained for binary classification into “accept” and “reject” classes, a global counterfactual explanation consists of a small set of “accept” graphs that are relevant to all of the input “reject” graphs. A transformation of a “reject” graph into an “accept” graph is called a recourse. A common recourse explanation is a small set of recourses from which every “reject” graph can be turned into an “accept” graph. Although local counterfactual explanations have been studied extensively, the problem of finding common recourse for global counterfactual explanation remains unexplored, particularly for GNNs. In this paper, we formalize the common recourse explanation problem and design an effective algorithm, COMRECGC, to solve it. We benchmark our algorithm against strong baselines on four real-world graph datasets and demonstrate the superior performance of COMRECGC over its competitors. We also compare common recourse explanations to global graph counterfactual explanations, showing that common recourse explanations are comparable or superior, making them worth considering for applications such as drug discovery or computational biology.
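To make the set-selection flavor of the problem concrete, the sketch below shows a minimal greedy, set-cover-style selection of recourses: pick a small set of candidate transformations such that every “reject” graph is turned into a graph the GNN accepts by at least one selected recourse. The names `reject_graphs`, `candidate_recourses`, `apply_recourse`, and `gnn_accepts` are hypothetical placeholders, and this illustrates only the problem setup, not the COMRECGC algorithm itself.

```python
# Illustrative sketch only (assumed helper functions, not the paper's method).
# reject_graphs: list of graphs the GNN classifies as "reject"
# candidate_recourses: pool of candidate graph transformations
# apply_recourse(graph, recourse): returns the transformed graph
# gnn_accepts(graph): True if the trained GNN classifies the graph as "accept"

def greedy_common_recourse(reject_graphs, candidate_recourses,
                           apply_recourse, gnn_accepts):
    """Greedily pick a small set of recourses so that every reject graph
    can be turned into an "accept" graph by at least one chosen recourse."""
    # For each candidate recourse, record which reject graphs (by index) it fixes.
    coverage = {
        i: {j for j, g in enumerate(reject_graphs)
            if gnn_accepts(apply_recourse(g, r))}
        for i, r in enumerate(candidate_recourses)
    }

    uncovered = set(range(len(reject_graphs)))
    chosen = []
    while uncovered:
        # Take the recourse that fixes the most still-uncovered graphs.
        best = max(coverage, key=lambda i: len(coverage[i] & uncovered))
        newly_covered = coverage[best] & uncovered
        if not newly_covered:
            break  # no remaining candidate helps the uncovered graphs
        chosen.append(candidate_recourses[best])
        uncovered -= newly_covered

    # Returns the selected recourses and indices of any graphs left uncovered.
    return chosen, uncovered
```

The greedy rule is the standard approximation for set cover; the actual objective and search procedure used by COMRECGC are described in the paper rather than in this sketch.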
Understanding and Improving AI Decisions in Networks

Artificial intelligence (AI) tools called graph neural networks (GNNs) are used to make decisions in many areas, such as figuring out how people connect on social media, predicting how molecules behave in medicine, or helping recommend products to users. But even though these tools can be very accurate, they often work like a “black box”: we see what decision they made, but we don’t know why.

To make these decisions more understandable, researchers are developing ways to explain them. One kind of explanation looks at what small changes would make the AI change its mind, for example, what changes would cause it to approve something it originally rejected. These are called “counterfactual explanations.”

In this paper, the authors go a step further. Instead of looking at each decision separately, they ask: can we find a small number of helpful changes that work across many rejected cases to turn them into accepted ones? Imagine finding just a few tweaks that could improve lots of different things at once; that is what they call “common recourse.”

The researchers create a method, called COMRECGC, to find these useful common changes. They test it on several real-world problems and show that it works better than other existing methods. This approach could be especially helpful in fields like drug development or biology, where figuring out small changes that make a big difference could save time, money, and lives.