

Oral

Rényi Neural Processes

Xuesong Wang · He Zhao · Edwin V. Bonilla

West Ballroom B
Oral 5C: Probabilistic Models
Thu 17 Jul 10 a.m. — 10:15 a.m. PDT

Abstract:

Neural Processes (NPs) are deep probabilistic models that represent stochastic processes by conditioning their prior distributions on a set of context points. Despite their advantages in uncertainty estimation for complex distributions, NPs enforce parameterization coupling between the conditional prior model and the posterior model. We show that this coupling amounts to prior misspecification, and we revisit the NP objective to address this issue. More specifically, we propose Rényi Neural Processes (RNPs), a method that replaces the standard KL divergence with the Rényi divergence, dampening the effects of the misspecified prior during posterior updates. We validate our approach across multiple benchmarks, including regression and image inpainting tasks, and show significant performance improvements of RNPs on real-world problems. Our extensive experiments show consistently better log-likelihoods than state-of-the-art NP models.
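To make the core substitution concrete, the sketch below computes the closed-form Rényi α-divergence between two diagonal Gaussians — the typical form of the NP latent prior and posterior — alongside the standard KL it replaces. This is an illustrative helper under that Gaussian assumption, not the authors' implementation; the function names and the choice of α are hypothetical. For α < 1 the Rényi divergence lower-bounds the KL, which is the sense in which it dampens the influence of a misspecified prior; it recovers the KL as α → 1.

```python
import numpy as np

def renyi_divergence_gaussian(mu1, var1, mu2, var2, alpha):
    """Renyi alpha-divergence D_alpha(N(mu1, var1) || N(mu2, var2))
    between diagonal Gaussians, summed over dimensions.

    Uses the closed form with mixed variance
        var_a = (1 - alpha) * var1 + alpha * var2,
    valid for 0 < alpha < 1, and for alpha > 1 while var_a > 0.
    """
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    var_a = (1.0 - alpha) * var1 + alpha * var2
    # Quadratic (mean-mismatch) term.
    quad = alpha * (mu1 - mu2) ** 2 / (2.0 * var_a)
    # Log-determinant term, divided by 2(alpha - 1).
    log_term = (np.log(var_a)
                - (1.0 - alpha) * np.log(var1)
                - alpha * np.log(var2)) / (2.0 * (alpha - 1.0))
    return float(np.sum(quad - log_term))

def kl_gaussian(mu1, var1, mu2, var2):
    """Standard KL(N1 || N2) for diagonal Gaussians (the alpha -> 1 limit)."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    return float(np.sum(0.5 * (np.log(var2 / var1)
                               + (var1 + (mu1 - mu2) ** 2) / var2
                               - 1.0)))
```

In an NP-style objective, the KL regularizer between the posterior q(z | context, target) and the conditional prior p(z | context) would simply be swapped for `renyi_divergence_gaussian` with a chosen α; with α < 1, large discrepancies caused by a misspecified prior contribute less to the posterior update than under the KL.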
