

Spotlight Poster

ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features

Alec Helbling · Tuna Han Salih Meral · Benjamin Hoover · Pinar Yanardag · Polo Chau

East Exhibition Hall A-B #E-3001
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 2D Efficient ML
Tue 15 Jul 3:30 p.m. PDT — 4:30 p.m. PDT

Abstract:

Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts within images. Without requiring additional training, ConceptAttention repurposes the parameters of DiT attention layers to produce highly contextualized concept embeddings, contributing the major discovery that performing linear projections in the output space of DiT attention layers yields significantly sharper saliency maps compared to commonly used cross-attention maps. ConceptAttention even achieves state-of-the-art performance on zero-shot image segmentation benchmarks, outperforming 15 other zero-shot interpretability methods on the ImageNet-Segmentation dataset. ConceptAttention works for popular image models and even seamlessly generalizes to video generation. Our work contributes the first evidence that the representations of multi-modal DiTs are highly transferable to vision tasks like segmentation.
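The following is a minimal, illustrative sketch (not the authors' code) of the core idea described in the abstract: building a concept saliency map from similarities in the attention output space, contrasted with a conventional cross-attention map. All tensors, dimensions, and projection weights here are random stand-ins for the activations a multi-modal DiT block would produce, and the normalization choice is assumed for illustration.

```python
# Sketch of the abstract's core claim, with placeholder activations:
# saliency from dot products in the attention OUTPUT space vs. a cross-attention map.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d = 64                                # embedding dimension of the DiT block (illustrative)
h = w = 16                            # image tokens arranged on an h x w patch grid (illustrative)
num_patches = h * w
concepts = ["cat", "sky", "grass"]    # simple textual concepts

# Stand-ins for post-attention outputs of image patch tokens and of the
# concept tokens routed through the same attention layers.
image_out = torch.randn(num_patches, d)       # (patches, d)
concept_out = torch.randn(len(concepts), d)   # (concepts, d)

# ConceptAttention-style map: linear projection in the attention output space,
# i.e. similarity between patch outputs and concept outputs, normalized over concepts.
output_space_scores = image_out @ concept_out.T / d ** 0.5          # (patches, concepts)
output_space_maps = F.softmax(output_space_scores, dim=-1).T.reshape(len(concepts), h, w)

# Baseline for comparison: a conventional cross-attention map built from
# query/key projections (weights here are random placeholders).
W_q, W_k = torch.randn(d, d), torch.randn(d, d)
q_img, k_con = image_out @ W_q, concept_out @ W_k
cross_attn_maps = F.softmax(q_img @ k_con.T / d ** 0.5, dim=-1).T.reshape(len(concepts), h, w)

print(output_space_maps.shape, cross_attn_maps.shape)   # both: (3, 16, 16)
```

In the paper's framing, the first map (output space) is the one reported to be sharper and to drive the zero-shot segmentation results; the cross-attention map is the commonly used baseline it is compared against.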

Lay Summary:

Recent AI models are capable of generating high-quality images from text descriptions. However, it is difficult to understand the internals of these models. Our approach, called ConceptAttention, explains the inner workings of these models by creating a set of heat maps for simple text concepts like "cat" or "sky". These heat maps highlight the locations in the image where these concepts are present. Our approach gives insight into how a model "sees" the image that it is generating, improving the transparency of these models. Our approach outperforms a variety of existing methods at isolating the locations of textual concepts, and requires no additional training. Remarkably, even though we designed ConceptAttention for image generation models, we found that it works for video generation models too. Not only does our method improve the interpretability and transparency of these powerful machine learning models, it can also be applied to downstream applications like image editing and segmentation.
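As a hedged illustration of the segmentation use case mentioned above (not the authors' evaluation code), per-concept heat maps like the ones ConceptAttention produces can be turned into a segmentation mask by assigning each pixel to its highest-scoring concept; the maps below are random placeholders.

```python
# Illustrative only: convert per-concept heat maps into a binary segmentation mask
# by taking the argmax over concepts at each spatial location.
import torch

concepts = ["cat", "background"]
heat_maps = torch.rand(len(concepts), 16, 16)   # placeholder saliency maps, one per concept

segmentation = heat_maps.argmax(dim=0)          # (16, 16) grid of concept indices
cat_mask = segmentation == concepts.index("cat")
print(cat_mask.float().mean())                  # fraction of pixels assigned to "cat"
```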
