Poster
Improving the Statistical Efficiency of Cross-Conformal Prediction
Matteo Gasparin · Aaditya Ramdas
East Exhibition Hall A-B #E-1401
Imagine a tool that not only makes predictions but also tells you how confident it is in those predictions. Conformal prediction methods do exactly that. Instead of providing a single outcome, they return a set of possible values that is guaranteed to contain the correct answer with a specified level of confidence. This allows users to better understand the uncertainty behind a machine learning model’s output in a clear and reliable way.

This research focuses on enhancing a particular version of this approach known as cross-conformal prediction. While this method already produces prediction intervals that meet the desired coverage, meaning they include the true value with high probability, it can sometimes be inefficient, resulting in prediction sets that are unnecessarily wide. We introduce new variants that preserve the same level of reliability while producing narrower and more informative sets. These improvements result from more efficient methods for combining p-values, which are a fundamental tool in statistical inference.

In practical terms, these advancements make cross-conformal prediction more usable by providing tighter prediction sets. This can be especially valuable in real-world applications, such as industrial settings, where thousands of predictions are made each day and decision-making depends on accurate information.
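For readers who want a concrete picture, the sketch below illustrates the general cross-conformal recipe for regression with a standard baseline merging rule: twice the arithmetic mean of the fold-wise p-values is again a valid p-value. This is only an illustrative sketch under assumed choices (absolute residuals as conformity scores, a simple least-squares model); the function and variable names are hypothetical and the improved merging variants proposed in the poster are not reproduced here.

```python
# Minimal cross-conformal sketch (illustrative; not the paper's code).
import numpy as np

def cross_conformal_pvalue(X, y, x_new, y_candidate, K=5, seed=0):
    """Merged p-value for the hypothesis that y_candidate is the label of x_new."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = np.array_split(rng.permutation(n), K)
    p_values = []
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        # Fit a simple least-squares model on the other K-1 folds.
        A = np.c_[np.ones(len(train)), X[train]]
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        predict = lambda Z: np.c_[np.ones(len(Z)), Z] @ beta
        # Conformity scores: absolute residuals on the held-out fold.
        scores = np.abs(y[fold] - predict(X[fold]))
        new_score = abs(y_candidate - predict(np.atleast_2d(x_new))[0])
        # Fold-wise conformal p-value.
        p_values.append((np.sum(scores >= new_score) + 1) / (len(fold) + 1))
    # Baseline merge: twice the average of valid p-values is a valid p-value.
    return min(1.0, 2 * np.mean(p_values))
```

The level-alpha prediction set for x_new is then the set of candidate labels whose merged p-value exceeds alpha, typically found by scanning a grid of candidates; more efficient merging rules shrink this set while preserving the coverage guarantee.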