Poster in Workshop: 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Selective Prediction via Training Dynamics
Stephan Rabanser · Anvith Thudi · Kimia Hamidieh · Adam Dziedzic · Israfil Bahceci · Akram Bin Sediq · Hamza Sokun · Nicolas Papernot
Selective prediction aims to reject inputs a model is likely to misclassify, balancing input coverage (how many points are accepted) with utility (performance on accepted inputs). Existing methods often modify model architectures or objectives, limiting practical use and introducing unwanted interactions with existing losses. In contrast, we show that state-of-the-art performance can be achieved by analyzing a model’s discretized training dynamics. Our framework monitors the instability of intermediate checkpoint predictions relative to the final model and rejects inputs with excessive late-stage disagreement. The approach is domain-agnostic, requires no train-time changes, and can be combined with existing methods. Experiments across image classification, regression, and time-series tasks show that our method improves on prior state-of-the-art utility–coverage trade-offs.
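The rejection rule described in the abstract can be sketched roughly as below. This is a minimal illustration assuming a classification setting where disagreement is measured as the fraction of late-stage checkpoints whose predicted label differs from the final model's; the function names, the `late_fraction` and `threshold` parameters, and the argmax-based disagreement measure are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def disagreement_score(checkpoint_logits, final_logits, late_fraction=0.5):
    """Score an input by how often late-stage checkpoints disagree with the final model.

    checkpoint_logits: array of shape (num_checkpoints, num_classes), logits from
        intermediate checkpoints ordered by training step.
    final_logits: array of shape (num_classes,), logits from the final model.
    late_fraction: fraction of the most recent checkpoints treated as "late stage".
    """
    checkpoint_preds = checkpoint_logits.argmax(axis=-1)
    final_pred = final_logits.argmax(axis=-1)
    # Keep only the last `late_fraction` of checkpoints.
    start = int((1.0 - late_fraction) * len(checkpoint_preds))
    late_preds = checkpoint_preds[start:]
    # Disagreement = fraction of late checkpoints whose label differs from the final model.
    return float(np.mean(late_preds != final_pred))

def selective_predict(checkpoint_logits, final_logits, threshold=0.2):
    """Accept the final model's prediction only if late-stage disagreement is low."""
    score = disagreement_score(checkpoint_logits, final_logits)
    if score > threshold:
        return None  # reject: defer this input instead of predicting
    return int(final_logits.argmax(axis=-1))
```

In this sketch, lowering `threshold` rejects more inputs (lower coverage, higher utility on accepted points), so sweeping it traces out a utility–coverage curve.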