

Poster in Affinity Workshop: New In ML

S$^2$Edit: Text-Guided Image Editing with Precise Semantic and Spatial Control

Xudong Liu · Zikun Chen · Ruowei Jiang · Ziyi Wu · Kejia Yin · Han Zhao · Parham Aarabi · Igor Gilitschenski

Presentation: New In ML
Tue 15 Jul, 8:00 a.m. – 5:30 p.m. PDT

Abstract: Recent advances in diffusion models have enabled high-quality text-guided generation and manipulation of images, as well as concept learning from images. However, naively applying existing methods to editing tasks that require fine-grained control, $\textit{e.g.}$, face editing, often yields suboptimal results: identity information and high-frequency details are lost during editing, or irrelevant image regions are altered due to entangled concepts. In this work, we propose S$^2$Edit, a novel method based on a pre-trained text-to-image diffusion model that enables personalized editing with precise semantic and spatial control. We first fine-tune our model to embed the identity information into a learnable text token. During fine-tuning, we disentangle the learned identity token from the attributes to be edited by enforcing an orthogonality constraint in the textual feature space. To ensure that the identity token only affects regions of interest, we apply object masks to guide the cross-attention maps. At inference time, our method performs localized editing while faithfully preserving the original identity, using the learned identity token, which is semantically disentangled and spatially focused. Extensive experiments demonstrate the superiority of S$^2$Edit over state-of-the-art methods both quantitatively and qualitatively. Additionally, we showcase several compositional image editing applications of S$^2$Edit such as makeup transfer.
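
To make the orthogonality constraint concrete, here is a minimal PyTorch sketch of one plausible formulation: penalizing the squared cosine similarity between the learnable identity embedding and the frozen embeddings of the attribute words, in the text encoder's feature space. This is an illustration, not the authors' implementation; the names `identity_emb`, `attr_embs`, and `lambda_ortho` are hypothetical, and the exact loss used in S$^2$Edit may differ.

```python
import torch
import torch.nn.functional as F

def orthogonality_loss(identity_emb: torch.Tensor,
                       attr_embs: torch.Tensor) -> torch.Tensor:
    """Penalize alignment between the learnable identity token and the
    embeddings of the attributes to be edited (hypothetical sketch).

    identity_emb: (d,)   -- learnable identity token embedding
    attr_embs:    (k, d) -- frozen text-encoder embeddings of k attribute words
    """
    ident = F.normalize(identity_emb, dim=-1)   # unit-norm identity direction
    attrs = F.normalize(attr_embs, dim=-1)      # unit-norm attribute directions
    # Squared cosine similarities; zero exactly when orthogonal.
    return (attrs @ ident).pow(2).mean()

# During fine-tuning, this term would be added to the usual diffusion
# denoising objective (names hypothetical):
#   loss = denoise_loss + lambda_ortho * orthogonality_loss(identity_emb, attr_embs)
```

Guiding the cross-attention maps with object masks can be sketched in a similar spirit. The snippet below assumes access to post-softmax cross-attention maps at some layer and zeroes out the identity token's attention outside the object region, renormalizing afterwards; `mask_identity_attention`, `obj_mask`, and `identity_idx` are hypothetical names, and the paper's actual masking strategy may operate differently (e.g., on pre-softmax logits).

```python
import torch

def mask_identity_attention(attn: torch.Tensor,
                            obj_mask: torch.Tensor,
                            identity_idx: int) -> torch.Tensor:
    """Restrict the identity token's cross-attention to the object region.

    attn:     (B, HW, T) -- post-softmax cross-attention over HW spatial
                            locations and T text tokens
    obj_mask: (B, HW)    -- binary object mask, 1 inside the region of
                            interest, resized to the attention resolution
    identity_idx: column index of the learned identity token
    """
    attn = attn.clone()
    # Zero the identity token's attention outside the mask so it cannot
    # influence irrelevant regions.
    attn[:, :, identity_idx] = attn[:, :, identity_idx] * obj_mask
    # Renormalize over tokens so each spatial location still sums to 1.
    return attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```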
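In practice, such a hook would be applied to the cross-attention layers of the denoising U-Net during both fine-tuning and editing, so the spatial restriction is consistent between the two stages; this pairing of stages is an assumption of the sketch, inferred from the abstract rather than stated in it.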
