Spotlight Poster
New Bounds for Sparse Variational Gaussian Processes
Michalis Titsias
East Exhibition Hall A-B #E-1301
Modeling uncertainty is one of the key challenges in machine learning. For regression and function approximation problems, Gaussian processes (GPs) provide a Bayesian nonparametric (i.e., memory-based) framework that estimates unknown functions while also providing uncertainty estimates. However, the computational cost of these models scales cubically with the number of training examples, so exact computation is prohibitive for large datasets. In this work we build on scalable GP methods that construct approximations based on smaller sets of special points called inducing points. More precisely, we improve a certain type of scalable GP method based on a posterior (variational) approximation of the model.
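As background for the variational inducing-point approach the poster improves, here is a minimal NumPy sketch of the classic collapsed variational lower bound for sparse GP regression (the standard bound this line of work builds on). The RBF kernel, the function names, and all hyperparameter values are illustrative assumptions, not the paper's actual implementation; the cost is O(nm²) for m inducing points instead of the O(n³) of exact GP regression.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of inputs (assumed for illustration).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def collapsed_bound(X, y, Z, noise_var=0.1, jitter=1e-6):
    """Classic collapsed variational lower bound for sparse GP regression
    with inducing inputs Z (m << n):
        L = log N(y | 0, Qnn + s^2 I) - tr(Knn - Qnn) / (2 s^2),
    where Qnn = Knm Kmm^{-1} Kmn is the Nystrom approximation of Knn."""
    n, m = X.shape[0], Z.shape[0]
    Knn_diag = np.full(n, 1.0)              # diag of Knn (rbf variance = 1)
    Kmm = rbf(Z, Z) + jitter * np.eye(m)    # jitter for numerical stability
    Kmn = rbf(Z, X)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn) / np.sqrt(noise_var)   # m x n
    B = np.eye(m) + A @ A.T
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / np.sqrt(noise_var)
    # log N(y | 0, Qnn + s^2 I) via the matrix determinant/inversion lemmas
    log_det = 2 * np.log(np.diag(LB)).sum() + n * np.log(noise_var)
    quad = (y @ y) / noise_var - c @ c
    log_marg = -0.5 * (n * np.log(2 * np.pi) + log_det + quad)
    # trace regulariser: tr(Qnn) = ||L^{-1} Kmn||_F^2 = s^2 * ||A||_F^2
    trace_term = -0.5 / noise_var * (Knn_diag.sum() - (A * A).sum() * noise_var)
    return log_marg + trace_term
```

The bound never exceeds the exact log marginal likelihood, and it tightens as inducing points are added, recovering the exact value when Z equals the full training inputs; maximizing it over Z (and the kernel hyperparameters) selects the approximation.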