Poster
Pixel-level Certified Explanations via Randomized Smoothing
Alaa Anani · Tobias Lorenz · Mario Fritz · Bernt Schiele
East Exhibition Hall A-B #E-2400
When AI models make decisions, such as identifying objects in images, we often try to understand why by looking at which parts of the image influenced the prediction. But these explanations can be unreliable: tiny, imperceptible changes to the image can completely change which pixels the model appears to rely on, even though its answer stays the same. We developed a new method, based on randomized smoothing, that makes these explanations far more stable and trustworthy: by aggregating explanations over many slightly noised copies of the image, it can certify which pixels truly matter, even if the image is altered. It works with any existing explanation technique. We also created new ways to measure how reliable and useful these explanations are. Our tests on many AI models show that our approach makes explanations clearer, more consistent, and safer to use in real applications.
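To make the idea concrete, here is a minimal sketch (not the authors' code) of randomized smoothing applied to a pixel attribution map: add Gaussian noise to the input, recompute the attribution for each noisy sample, mark the top-k pixels of each map as important, and keep only pixels that are consistently important across samples. The names `attribution_fn`, `sigma`, `n_samples`, `topk_frac`, and `tau` are hypothetical placeholders, and a real certificate would additionally bound the vote rate statistically.

```python
import torch

def smoothed_explanation(x, attribution_fn, sigma=0.25, n_samples=100,
                         topk_frac=0.1, tau=0.75):
    """Return a mask of pixels whose importance is stable under noise.

    x:               input image tensor, shape (C, H, W)
    attribution_fn:  hypothetical wrapper mapping an image to an (H, W)
                     importance map (any explanation method)
    sigma:           std. dev. of the Gaussian smoothing noise
    n_samples:       Monte Carlo samples used to estimate stability
    topk_frac:       fraction of pixels marked important per sample
    tau:             agreement rate required to keep a pixel
    """
    h, w = x.shape[-2:]
    k = int(topk_frac * h * w)
    votes = torch.zeros(h, w)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and re-explain it.
        noisy = x + sigma * torch.randn_like(x)
        attr = attribution_fn(noisy).reshape(-1)
        # Mark the top-k most important pixels for this noisy sample.
        mask = torch.zeros_like(attr)
        mask[attr.topk(k).indices] = 1.0
        votes += mask.reshape(h, w)
    # Keep a pixel only if it was important in >= tau of the samples.
    # A real certificate would replace this empirical rate with a
    # statistical lower bound (e.g., a Clopper-Pearson interval).
    return (votes / n_samples) >= tau
```

The design mirrors randomized smoothing for classifiers: instead of certifying a class label, the per-pixel "important / not important" vote is what gets stabilized under noise.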