

Spotlight Poster

Conformal Prediction as Bayesian Quadrature

Jake Snell · Thomas Griffiths

East Exhibition Hall A-B #E-1305
Outstanding Paper
[ Project Page ]
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 4C Privacy and Uncertainty Quantification
Wed 16 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models will incur even when the details of the models are hidden. However, such methods are based on frequentist probability, which unduly limits their applicability. We revisit the central aspects of conformal prediction from a Bayesian perspective and thereby illuminate the shortcomings of frequentist guarantees. We propose a practical alternative based on Bayesian quadrature that provides interpretable guarantees and offers a richer representation of the likely range of losses to be observed at test time.
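The abstract contrasts the single frequentist guarantee of conformal prediction with a richer Bayesian representation of the test-time loss. The sketch below is only an illustration of that contrast, not the authors' method: it compares a split-conformal-style quantile of calibration losses with a Dirichlet-weighted (Bayesian-bootstrap) posterior over the expected loss, standing in for Bayesian quadrature. The calibration losses, risk level, and draw counts are placeholder assumptions.

# Illustrative sketch (not the authors' exact method): contrast a standard
# split-conformal quantile with a simple Bayesian treatment of the loss.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration losses from a black-box model (e.g. a bounded
# nonconformity-style loss), one value per calibration example.
cal_losses = rng.beta(2.0, 8.0, size=200)

alpha = 0.1  # target risk level
n = len(cal_losses)

# Frequentist split-conformal-style threshold: the ceil((n+1)(1-alpha))-th
# order statistic of the calibration losses gives a single point guarantee.
k = int(np.ceil((n + 1) * (1 - alpha)))
conformal_threshold = np.sort(cal_losses)[min(k, n) - 1]

# Bayesian alternative (sketch): place Dirichlet(1, ..., 1) weights over the
# calibration losses and examine the induced posterior over the expected loss.
num_draws = 5000
weights = rng.dirichlet(np.ones(n), size=num_draws)   # shape (num_draws, n)
posterior_expected_loss = weights @ cal_losses          # one draw per row

lo, hi = np.quantile(posterior_expected_loss, [0.05, 0.95])
print(f"conformal threshold at level {alpha}: {conformal_threshold:.3f}")
print(f"90% credible interval for expected loss: [{lo:.3f}, {hi:.3f}]")

The point of the sketch is the shape of the output: the frequentist route yields one threshold, while the Bayesian route yields a full distribution over the likely loss, from which intervals or other summaries can be read off.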

Lay Summary:

Machine learning can be used to make predictions in high-stakes settings. In these settings, we want to decide whether we should use a particular prediction algorithm. We can first measure the algorithm's performance on some data and then use that measurement to estimate whether the algorithm is appropriate. Previous methods for making this estimate have two main issues: some require strong assumptions about the algorithm itself, while others produce only a single estimate rather than a range of possibilities. We show how to produce a range of plausible estimates without strong assumptions, which allows us to better decide whether the algorithm can be used safely.
