
LLM-guided instance-level image manipulation with diffusion U-Net cross-attention maps

Palaev, Andrey; Khan, Adil; Kazmi, Ahsan



Authors

Andrey Palaev

Adil Khan


Ahsan Kazmi (Ahsan.Kazmi@uwe.ac.uk)
Senior Lecturer in Data Science



Abstract

The advancement of text-to-image synthesis has introduced powerful generative models capable of creating realistic images from textual prompts. However, precise control over image attributes remains challenging, especially at the instance level. While existing methods offer some control through fine-tuning or auxiliary information, they often face limitations in flexibility and accuracy. To address these challenges, we propose a pipeline that leverages Large Language Models (LLMs), open-vocabulary detectors, and the cross-attention maps and intermediate activations of the diffusion U-Net for instance-level image manipulation. Our method detects the objects that are mentioned in the prompt and present in the generated image, enabling precise manipulation without extensive training or auxiliary inputs such as masks or bounding boxes. By incorporating cross-attention maps, our approach controls object positions while keeping the manipulated image coherent.
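The role the cross-attention maps play can be illustrated with a minimal sketch: in a diffusion U-Net, each cross-attention layer produces a map linking spatial locations to prompt tokens, so thresholding one token's map yields a rough instance mask for the corresponding object. The function below is an illustrative NumPy sketch under assumed shapes (a head-averaged map of shape `(h*w, num_tokens)`), not the authors' implementation; the function name and the fixed thresholding scheme are assumptions.

```python
import numpy as np

def token_attention_mask(attn, h, w, token_idx, threshold=0.5):
    """Turn one prompt token's cross-attention column into a binary mask.

    attn: (h*w, num_tokens) cross-attention map, averaged over U-Net heads
    Returns an (h, w) boolean mask of locations whose (normalised)
    attention to the token exceeds the threshold.
    """
    # Extract the token's column and restore the spatial layout.
    spatial = attn[:, token_idx].reshape(h, w)
    # Min-max normalise so the threshold is scale-invariant.
    spatial = (spatial - spatial.min()) / (spatial.max() - spatial.min() + 1e-8)
    return spatial > threshold

# Toy example: an 8x8 map where token 2 (say, "dog") attends to the
# top-left 3x3 corner and all other attention is near-uniform noise.
h = w = 8
attn = np.full((h * w, 4), 0.01)
grid = np.zeros((h, w))
grid[:3, :3] = 1.0
attn[:, 2] = grid.ravel()
mask = token_attention_mask(attn, h, w, token_idx=2)
```

In practice such maps would be collected from the U-Net's cross-attention layers during denoising (e.g. via forward hooks) and averaged across layers and timesteps before thresholding.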

Presentation Conference Type: Conference Paper (unpublished)
Conference Name: British Machine Vision Conference
Start Date: Nov 25, 2024
End Date: Nov 28, 2024
Acceptance Date: Jul 20, 2024
Deposit Date: Oct 3, 2024
Publicly Available Date: Oct 3, 2024
Peer Reviewed: Yes
Public URL: https://uwe-repository.worktribe.com/output/13263543
