

Poster

Joint Learning of Energy-based Models and their Partition Function

Michael Sander · Vincent Roulet · Tianlin Liu · Mathieu Blondel

West Exhibition Hall B2-B3 #W-513
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Energy-based models (EBMs) offer a flexible framework for parameterizing probability distributions using neural networks. However, learning EBMs by exact maximum likelihood estimation (MLE) is generally intractable, due to the need to compute the partition function. In this paper, we propose a novel min-min formulation for approximately learning probabilistic EBMs in combinatorially-large discrete spaces, such as sets or permutations. Our key idea is to jointly learn both an energy model and its log-partition, parameterized as a neural network. Our approach not only provides a novel tractable objective criterion to learn EBMs by stochastic gradient descent (without relying on MCMC), but also a novel means to estimate the log-partition function on unseen data points. On the theoretical side, we show that our approach recovers the optimal MLE solution when optimizing in the space of continuous functions. Furthermore, we show that our approach naturally extends to the broader family of Fenchel-Young losses, allowing us to obtain the first tractable method for optimizing the sparsemax loss in combinatorially-large spaces. We demonstrate our approach on multilabel classification and label ranking.
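The abstract describes the method only at a high level. As an illustration, here is a minimal JAX sketch of one way a joint min-min objective of this kind can be set up for multilabel classification over Y = {0, 1}^k. It uses the classical variational bound log Z <= c - 1 + exp(-c) Z, tight at c = log Z, to couple an energy network with a log-partition network c_phi(x). The architectures, the uniform Monte Carlo estimator of Z, and all names (energy, log_partition_net, joint_loss) are illustrative assumptions, not the authors' implementation.

```python
import jax
import jax.numpy as jnp

def energy(theta, x, y):
    # Bilinear energy f_theta(x, y) = <W x, y>; a stand-in for any neural energy model.
    return jnp.dot(theta["W"] @ x, y)

def log_partition_net(phi, x):
    # Small network c_phi(x) trained to track log Z_theta(x), usable on unseen x.
    return jnp.dot(phi["v"], jnp.tanh(phi["U"] @ x))

def joint_loss(params, x, y, key, k, n_samples=64):
    """Joint objective in (theta, phi) for one example (x, y), Y = {0, 1}^k."""
    theta, phi = params
    c = log_partition_net(phi, x)
    # Uniform Monte Carlo estimate of Z = |Y| * E_{y' ~ Unif(Y)}[exp f_theta(x, y')].
    y_samples = jax.random.bernoulli(key, 0.5, (n_samples, k)).astype(jnp.float32)
    f_samples = jax.vmap(lambda ys: energy(theta, x, ys))(y_samples)
    z_hat = (2.0 ** k) * jnp.mean(jnp.exp(f_samples))
    # For any c, log Z <= c - 1 + exp(-c) * Z, with equality at c = log Z.
    # Minimizing over phi tightens the bound while minimizing over theta
    # performs approximate maximum likelihood: a min-min problem solvable by SGD.
    log_z_bound = c - 1.0 + jnp.exp(-c) * z_hat
    return -energy(theta, x, y) + log_z_bound

# Toy usage: one SGD step on a random example with k = 5 labels.
k, d, h = 5, 8, 16
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
theta = {"W": 0.1 * jax.random.normal(k1, (k, d))}
phi = {"U": 0.1 * jax.random.normal(k2, (h, d)), "v": jnp.zeros(h)}
x = jax.random.normal(k3, (d,))
y = jax.random.bernoulli(k4, 0.5, (k,)).astype(jnp.float32)

loss, grads = jax.value_and_grad(joint_loss)((theta, phi), x, y, jax.random.PRNGKey(1), k)
theta, phi = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, (theta, phi), grads)
```

In this sketch, driving c_phi(x) toward log Z_theta(x) is what would let the same network estimate the log-partition on unseen inputs, matching the abstract's claim.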

Lay Summary:

We propose a new method to learn probabilistic energy-based models.
