Poster
Rectifying Conformity Scores for Better Conditional Coverage
Vincent Plassier · Alexander Fishkov · Victor Dheur · Mohsen Guizani · Souhaib Ben Taieb · Maxim Panov · Eric Moulines
West Exhibition Hall B2-B3 #W-1016
We present a new method for generating confidence sets within the split conformal prediction framework. Our method performs a trainable transformation of any given conformity score to improve conditional coverage while ensuring exact marginal coverage. The transformation is based on an estimate of the conditional quantile of conformity scores. The resulting method is particularly beneficial for constructing adaptive confidence sets in multi-output problems where standard conformal quantile regression approaches have limited applicability. We develop a theoretical bound that captures the influence of the accuracy of the quantile estimate on the approximate conditional validity, unlike classical bounds for conformal prediction methods that only offer marginal coverage. We experimentally show that our method is highly adaptive to the local data structure and outperforms existing methods in terms of conditional coverage, improving the reliability of statistical inference in various applications.
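To make the mechanism concrete, the sketch below illustrates the general idea of rectifying a conformity score with a conditional quantile estimate inside a split conformal pipeline. It is a minimal illustration under simplifying assumptions (single-output regression, absolute residuals as the base score, a gradient-boosted quantile regressor as the conditional quantile estimate), not the authors' actual construction or its multi-output extension.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Minimal sketch of rectified conformity scores in split conformal prediction.
# All modeling choices here (absolute residuals, gradient boosting) are
# illustrative assumptions, not the paper's exact algorithm.

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic heteroscedastic regression: noise scale grows with |x|.
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x[:, 0]) + (0.1 + 0.4 * np.abs(x[:, 0])) * rng.normal(size=n)
    return x, y

x_fit, y_fit = make_data(2000)    # split used to fit mu and the quantile model
x_cal, y_cal = make_data(1000)    # calibration split
x_test, y_test = make_data(1000)

alpha = 0.1  # target miscoverage level

# 1) Point predictor and base conformity score s(x, y) = |y - mu(x)|.
mu = GradientBoostingRegressor().fit(x_fit, y_fit)
scores_fit = np.abs(y_fit - mu.predict(x_fit))

# 2) Estimate the conditional (1 - alpha)-quantile of the score given x.
q_hat = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha)
q_hat.fit(x_fit, scores_fit)

# 3) Rectified scores on the calibration split: s(x, y) / q_hat(x).
q_cal = np.maximum(q_hat.predict(x_cal), 1e-6)
rect_cal = np.abs(y_cal - mu.predict(x_cal)) / q_cal

# 4) Standard split-conformal threshold on the rectified scores.
n = len(rect_cal)
k = int(np.ceil((n + 1) * (1 - alpha)))
tau = np.sort(rect_cal)[min(k, n) - 1]

# 5) Adaptive prediction intervals: mu(x) +/- tau * q_hat(x).
q_test = np.maximum(q_hat.predict(x_test), 1e-6)
lo = mu.predict(x_test) - tau * q_test
hi = mu.predict(x_test) + tau * q_test
print("empirical marginal coverage:", np.mean((y_test >= lo) & (y_test <= hi)))
```

Because the threshold tau is computed on a held-out calibration split of exchangeable rectified scores, the interval inherits exact marginal coverage, while the x-dependent normalization widens or narrows it according to the estimated local score quantile, which is the source of the improved conditional behavior.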
When artificial intelligence (AI) models make predictions, such as forecasting the weather, diagnosing diseases, or making financial decisions, it is important not just to get a single answer, but to know how confident the model is in that answer. This is where “prediction intervals” come in: they provide a range of possible outcomes that is likely to contain the true answer.

A popular technique called conformal prediction already does this. It can guarantee that, on average, the true answer falls inside the prediction interval a desired percentage of the time (say, 90%). However, these guarantees are “marginal”: they hold only when averaged across all situations and do not adjust to different types of inputs. For instance, the model might be very accurate on common cases but less reliable on unusual ones, yet the intervals treat both the same.

This paper introduces a new method that improves how these prediction intervals adapt to different situations. The authors developed a technique that adjusts the model’s internal uncertainty scores in a data-driven way. By doing this, their method keeps the original guarantees while making the intervals better suited to the specific conditions of each case, giving tighter estimates when the model is more certain and wider ones when it is less sure.

The authors show that this approach is especially helpful for complex tasks where the model has to predict multiple values at once. In tests on both simulated and real-world datasets, their method provided more reliable predictions than existing tools.