

Spotlight Poster

Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings

Angéline Pouget · Mohammad Yaghini · Stephan Rabanser · Nicolas Papernot

East Exhibition Hall A-B #E-504
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 6D Evaluation
Thu 17 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct validation. We propose the suitability filter, a novel framework designed to detect performance deterioration by utilizing suitability signals—model output features that are sensitive to covariate shifts and indicative of potential prediction errors. The suitability filter evaluates whether classifier accuracy on unlabeled user data shows significant degradation compared to the accuracy measured on the labeled test dataset. Specifically, it ensures that this degradation does not exceed a pre-specified margin, which represents the maximum acceptable drop in accuracy. To achieve reliable performance evaluation, we aggregate suitability signals for both test and user data and compare these empirical distributions using statistical hypothesis testing, thus providing insights into decision uncertainty. Our modular method adapts to various models and domains. Empirical evaluations across different classification tasks demonstrate that the suitability filter reliably detects performance deviations due to covariate shift. This enables proactive mitigation of potential failures in high-stakes applications.
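The abstract outlines a concrete statistical recipe: compute per-sample suitability signals, turn them into estimated correctness, and run a margin-based non-inferiority test between the labeled test data and the unlabeled user data. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes max-softmax confidence as the sole suitability signal, a logistic-regression correctness estimator, and a one-sided Welch's t-test; the function names and the margin/alpha defaults are hypothetical.

```python
# Minimal sketch of a suitability-filter-style check (illustrative only).
# Assumptions not taken from the paper: max-softmax confidence is the
# only suitability signal, correctness is estimated with logistic
# regression, and non-inferiority is decided with a one-sided Welch test.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression


def fit_correctness_estimator(test_signals, test_correct):
    """Fit a model mapping suitability signals to P(prediction correct),
    using the labeled test set where correctness is observable."""
    estimator = LogisticRegression()
    estimator.fit(test_signals, test_correct)
    return estimator


def suitability_filter(estimator, test_signals, user_signals,
                       margin=0.05, alpha=0.05):
    """One-sided non-inferiority test.

    H0: mean user accuracy < mean test accuracy - margin (degraded).
    Rejecting H0 (small p-value) supports declaring the model suitable.
    """
    p_test = estimator.predict_proba(test_signals)[:, 1]
    p_user = estimator.predict_proba(user_signals)[:, 1]
    # Shift the user estimates up by the margin and test whether their
    # mean exceeds the test mean (Welch's t-test, one-sided).
    result = stats.ttest_ind(p_user + margin, p_test,
                             equal_var=False, alternative="greater")
    return result.pvalue < alpha, result.pvalue


# Illustrative usage with synthetic confidences (hypothetical data).
rng = np.random.default_rng(0)
test_signals = rng.uniform(0.5, 1.0, size=(1000, 1))
test_correct = (rng.uniform(size=1000) < test_signals[:, 0]).astype(int)
user_signals = rng.uniform(0.4, 1.0, size=(1000, 1))  # mild covariate shift

estimator = fit_correctness_estimator(test_signals, test_correct)
suitable, p_value = suitability_filter(estimator, test_signals, user_signals)
print(f"suitable={suitable}, p={p_value:.4f}")
```

Under this formulation, the null hypothesis is that user accuracy falls more than the margin below test accuracy, so the filter only declares the model suitable when the data provide strong evidence against degradation; note that the paper describes aggregating multiple suitability signals, which this sketch collapses to a single confidence feature for brevity.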

Lay Summary:

Machine learning models learn from data to make decisions, but it can be tricky to ensure they remain dependable when they encounter new, real-world situations. This research introduces a new way to check if these models are starting to make more mistakes with new data, particularly when we can't easily verify whether their decisions are correct. The method works by examining subtle clues in how the model behaves with both familiar and new data to detect if its decision-making quality has declined. Experiments showed this approach can successfully flag when a model is struggling because the new information is different from what it was prepared for. This helps build confidence that these machine learning models are working correctly and can be trusted, especially in important everyday applications.
