Poster in Workshop: 2nd Generative AI for Biology Workshop

Do we need equivariant models for molecule generation?

Ewa M. Nowara · Joshua Rackers · Patricia Suriana · Pan Kessel · Max Shen · Andrew Watkins · Michael Maser

Keywords: [ Generative Models ] [ Drug Discovery ] [ Equivariant Models ] [ Molecules ] [ Voxel Structures ] [ 3D Generation ] [ Machine Learning ]


Abstract:

Deep generative models are increasingly used for molecular discovery, with most recent approaches relying on equivariant graph neural networks (GNNs) under the assumption that explicit equivariance is essential for generating high-quality 3D molecules. However, these models are complex, difficult to train, and scale poorly. We investigate whether non-equivariant convolutional neural networks (CNNs) trained with rotation augmentations can learn equivariance and match the performance of equivariant models. We derive a loss decomposition that separates prediction error from equivariance error, and evaluate how model size, dataset size, and training duration affect performance across denoising, molecule generation, and property prediction. To our knowledge, this is the first study to analyze learned equivariance in generative tasks.
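The equivariance error mentioned in the abstract can be made concrete: for a rotation R, a model f is equivariant when f(Rx) = R f(x), so the mismatch between the two sides measures how far a trained network is from equivariance, separately from its prediction error. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): it probes a voxel-grid denoiser with exact 90-degree grid rotations, which avoid interpolation artifacts.

```python
import numpy as np

def equivariance_error(f, x, rots):
    """Mean squared difference between f(R x) and R f(x), averaged over rotations R."""
    errs = [np.mean((f(rot(x)) - rot(f(x))) ** 2) for rot in rots]
    return float(np.mean(errs))

# Exact 90-degree rotations of a 3D voxel grid about each axis (no interpolation).
rots = [
    lambda v: np.rot90(v, k=1, axes=(0, 1)),
    lambda v: np.rot90(v, k=1, axes=(0, 2)),
    lambda v: np.rot90(v, k=1, axes=(1, 2)),
]

x = np.random.rand(8, 8, 8)

# A perfectly equivariant "model" (the identity map) has zero equivariance error.
err_identity = equivariance_error(lambda v: v, x, rots)

# A non-equivariant model (adds a fixed, asymmetric bias field) does not.
bias = np.random.rand(8, 8, 8)
err_biased = equivariance_error(lambda v: v + bias, x, rots)
```

In practice one would average this quantity over many sampled rotations and test molecules; comparing it before and after training with rotation augmentations shows how much equivariance the non-equivariant CNN has learned.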
