Poster
in
Workshop: 3rd Workshop on High-dimensional Learning Dynamics (HiLD)

Neural Collapse is Globally Optimal in Deep Regularized ResNets and Transformers

Peter Súkeník · Christoph Lampert · Marco Mondelli


Abstract:

The empirical emergence of neural collapse, a surprising symmetry in the penultimate-layer feature representations of the training data in deep neural networks, has spurred a line of theoretical research aimed at understanding it. However, existing work either focuses on data-agnostic models or remains limited to multi-layer perceptrons. We fill both gaps by analyzing modern architectures in a data-aware regime: we prove that global optima of deep regularized transformers and residual networks (ResNets) with LayerNorm, trained with cross-entropy or mean-squared-error loss, are approximately collapsed, and the approximation tightens as the depth grows. More generally, we formally reduce any end-to-end large-depth ResNet or transformer training problem to an equivalent unconstrained features model, thus justifying the model's wide use in the literature even beyond data-agnostic settings. Our theoretical results are supported by experiments on computer vision and language datasets showing that, as depth grows, neural collapse indeed becomes more prominent.
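To make the notion of collapse in the abstract concrete, the following is a minimal sketch of one common way to quantify within-class variability collapse (often called the NC1 metric): the ratio of within-class to between-class feature variance in the penultimate layer, which approaches zero as features collapse to their class means. The function name `nc1_metric` and the exact ratio used are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def nc1_metric(features, labels):
    """Illustrative within-class variability collapse (NC1) metric:
    ratio of within-class to between-class feature variance.
    Tends to 0 when each sample's feature equals its class mean.
    features: (N, d) penultimate-layer activations; labels: (N,) class ids."""
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in classes:
        fc = features[labels == c]          # features of class c
        mu_c = fc.mean(axis=0)              # class mean
        within += ((fc - mu_c) ** 2).sum()  # within-class scatter
        between += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return within / between

# Perfectly collapsed toy features: every sample equals its class mean,
# so the metric is exactly 0.
collapsed = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 5)
labels = np.array([0] * 5 + [1] * 5)
print(nc1_metric(collapsed, labels))  # → 0.0
```

On real networks, the paper's result predicts this kind of ratio shrinking toward zero at the global optimum as depth increases.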
