

Spotlight Poster

Normalizing Flows are Capable Generative Models

Shuangfei Zhai · Ruixiang Zhang · Preetum Nakkiran · David Berthelot · Jiatao Gu · Huangjie Zheng · Tianrong Chen · Miguel Angel Bautista Martin · Navdeep Jaitly · Joshua M Susskind

East Exhibition Hall A-B #E-2911
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 6B Deep Learning Architectures
Thu 17 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Normalizing Flows (NFs) are likelihood-based models for continuous inputs. They have demonstrated promising results on both density estimation and generative modeling tasks, but have received relatively little attention in recent years. In this work, we demonstrate that NFs are more powerful than previously believed. We present TarFlow: a simple and scalable architecture that enables highly performant NF models. TarFlow can be thought of as a Transformer-based variant of Masked Autoregressive Flows (MAFs): it consists of a stack of autoregressive Transformer blocks on image patches, alternating the autoregression direction between layers. TarFlow is straightforward to train end-to-end, and capable of directly modeling and generating pixels. We also propose three key techniques to improve sample quality: Gaussian noise augmentation during training, a post training denoising procedure, and an effective guidance method for both class-conditional and unconditional settings. Putting these together, TarFlow sets new state-of-the-art results on likelihood estimation for images, beating the previous best methods by a large margin, and generates samples with quality and diversity comparable to diffusion models, for the first time with a stand-alone NF model. We make our code available at https://github.com/apple/ml-tarflow.
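To make the architecture concrete, below is a minimal PyTorch-style sketch of the kind of layer the abstract describes: a causal Transformer predicts a per-patch affine transform from the preceding patches (as in Masked Autoregressive Flows), and the patch order is reversed between layers to alternate the autoregression direction. All class names, hyperparameters, and layer sizes here are illustrative assumptions, not the released TarFlow implementation.

```python
import torch
import torch.nn as nn

class CausalFlowBlock(nn.Module):
    """One MAF-style layer: a causal Transformer predicts an affine
    transform for each patch from the patches before it, so the Jacobian
    is triangular and its log-determinant is a simple sum."""

    def __init__(self, dim, num_heads=8, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(dim, 2 * dim)  # per-patch (log_scale, shift)

    def forward(self, x):
        # x: (batch, num_patches, dim) flattened image patches.
        B, T, D = x.shape
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        # Shift the sequence right so patch t is conditioned only on patches < t.
        ctx = torch.cat([x.new_zeros(B, 1, D), x[:, :-1]], dim=1)
        h = self.encoder(ctx, mask=mask)
        log_s, b = self.proj(h).chunk(2, dim=-1)
        z = (x - b) * torch.exp(-log_s)
        log_det = -log_s.sum(dim=(1, 2))  # exact, thanks to triangularity
        return z, log_det


class TarFlowSketch(nn.Module):
    """A stack of causal blocks; the patch order is flipped between
    layers, alternating the autoregression direction as in the abstract."""

    def __init__(self, dim, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(CausalFlowBlock(dim) for _ in range(num_blocks))

    def forward(self, x):
        total_log_det = torch.zeros(x.shape[0], device=x.device)
        for i, block in enumerate(self.blocks):
            x = x.flip(1) if i % 2 else x  # alternate direction between layers
            x, log_det = block(x)
            total_log_det = total_log_det + log_det
        return x, total_log_det
```

Because each patch is transformed using only the patches before it, the Jacobian is block-triangular, so the exact log-determinant needed for maximum-likelihood training reduces to a sum of the predicted log-scales; reversing the patch order between layers lets later layers condition on context the earlier layers could not see.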

Lay Summary:

Normalizing flows are a classical unsupervised learning algorithm. They enjoy several unique properties: an exact training loss, the ability to both encode and decode, and likelihood evaluation for free. However, they have been largely overlooked in the modern generative AI era. In this paper, we propose recipes for training normalizing flows that raise their performance to an unprecedented level: state-of-the-art likelihood estimation, as well as image generation quality comparable to that of diffusion models. We achieve this with a carefully designed Transformer-based architecture, together with techniques such as Gaussian-noise-augmented training, score-based denoising, and guidance. Our method is considerably simpler than existing designs and enjoys stable, scalable training. We believe this work shows that normalizing flows deserve to be treated as serious contenders to more popular methods such as diffusion models and discrete autoregressive models. We have also made our code available at https://github.com/apple/ml-tarflow, providing a foundation for future exploration.
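As a rough illustration of two of the techniques named above, the sketch below adds Gaussian noise to inputs during training and then denoises generated samples post hoc with Tweedie's formula, exploiting the exact likelihood a flow provides to obtain the score by autograd. The function names, the noise level sigma, and the `model(x) -> (z, log_det)` interface are assumptions for illustration, not the paper's exact recipe.

```python
import math
import torch

def flow_log_prob(model, x):
    """Exact log-likelihood of x under a flow with a standard-normal base.
    Assumes model(x) returns (z, log_det), as in the sketch above."""
    z, log_det = model(x)
    d = z[0].numel()
    log_pz = -0.5 * (z ** 2).sum(dim=tuple(range(1, z.dim()))) \
             - 0.5 * d * math.log(2 * math.pi)
    return log_pz + log_det

def noise_augmented_loss(model, x, sigma=0.05):
    """Training objective sketch: the flow models p(x + sigma * eps)."""
    return -flow_log_prob(model, x + sigma * torch.randn_like(x)).mean()

def tweedie_denoise(model, y, sigma=0.05):
    """Score-based denoising of a generated sample y via Tweedie's formula:
    E[x | y] = y + sigma^2 * grad_y log p(y)."""
    y = y.detach().requires_grad_(True)
    score = torch.autograd.grad(flow_log_prob(model, y).sum(), y)[0]
    return (y + sigma ** 2 * score).detach()
```

At sampling time one would draw z from the base distribution, invert the flow (sequentially, since each layer is autoregressive), and apply the denoising step to the result.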
