Poster
A Rescaling-Invariant Lipschitz Bound Based on Path-Metrics for Modern ReLU Network Parameterizations
Antoine Gonon · Nicolas Brisebarre · Elisa Riccietti · Rémi Gribonval
West Exhibition Hall B2-B3 #W-807
Imagine adjusting the settings on a complex sound system with countless knobs and switches. In the world of artificial intelligence, these "knobs" are the internal settings of neural networks. People often tweak them to make AI models more efficient (e.g., faster and/or cheaper to use). However, even small adjustments can sometimes lead to unexpected changes in how the AI behaves. Traditional methods for checking this stability often don't apply to modern, complex AI models, and they can predict severe instability even for changes that are known to be harmless.

Our research introduces a new method that extends stability checks to modern AI architectures and significantly reduces these misleading warnings. In particular, it guarantees that harmless knob changes cannot trigger overly negative predictions about the model's behavior. We also show that this method can help simplify AI models by safely removing unneeded parts without sacrificing performance. This opens the door to more reliable and efficient AI systems in real-world use.
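The "harmless knob changes" above are the rescaling symmetries of ReLU networks: scaling one layer's weights up and the next layer's down leaves the network's behavior unchanged. The minimal NumPy sketch below (an illustration only, not the paper's actual bound; the toy two-layer network, the scale factor lam, and the simple path quantity are assumptions made for this example) shows why rescaling-invariant, path-based quantities matter: the classical product-of-norms stability estimate explodes under such a rescaling, while a path-based quantity stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.standard_normal((8, 5))
W2 = rng.standard_normal((1, 8))
relu = lambda z: np.maximum(z, 0.0)
f = lambda x, A, B: B @ relu(A @ x)

# "Harmless knob change": scale hidden neuron 3 by lam and undo it in the next layer.
lam = 100.0
D = np.diag(np.where(np.arange(8) == 3, lam, 1.0))
W1s, W2s = D @ W1, W2 @ np.linalg.inv(D)   # same function, different parameters

x = rng.standard_normal(5)
print(np.allclose(f(x, W1, W2), f(x, W1s, W2s)))   # True: the network is unchanged

# Classical layer-wise bound (product of spectral norms) is NOT rescaling-invariant ...
naive = lambda A, B: np.linalg.norm(B, 2) * np.linalg.norm(A, 2)
print(naive(W1, W2), naive(W1s, W2s))               # the estimate blows up

# ... while a path-based quantity (sum over paths of |W2[0,j]| * |W1[j,i]|) is invariant.
path = lambda A, B: np.sum(np.abs(B).T * np.abs(A))
print(path(W1, W2), path(W1s, W2s))                 # identical before and after rescaling
```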