A Study on Enhancing Text-to-Image Diffusion Models with Temporal Adaptive Attention Map Guidance

Abstract

Text-to-image generation aims to create visually compelling images aligned with input prompts, but challenges such as subject misalignment and subject neglect, often caused by semantic leakage during the generation process, remain, particularly in multi-subject scenarios. To mitigate this, existing methods optimize attention maps in diffusion models using static loss functions at each timestep, which often leads to suboptimal results because the varying characteristics of the diffusion stages are insufficiently considered. To address this problem, we propose a novel framework that adaptively guides the attention maps by dividing the diffusion process into four intervals: initial, layout, shape, and texture. We adaptively optimize attention maps using interval-specific strategies and a dynamic loss function. Additionally, we introduce a seed filtering method based on self-attention map analysis that detects semantic leakage and, when necessary, restarts the generation process with a new noise seed. Extensive experiments on various datasets demonstrate that our method achieves significant improvements in generating images aligned with input prompts, outperforming previous approaches both quantitatively and qualitatively.
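
To make the described control flow concrete, the following is a minimal Python sketch, not the thesis implementation, of how a denoising loop could dispatch interval-specific attention-map losses and apply seed filtering by restarting with a new noise seed. All names, interval boundaries, loss definitions, and the leakage threshold below are hypothetical placeholders; the actual method operates on the cross- and self-attention maps inside Stable Diffusion's UNet.

import torch

# Assumed (hypothetical) interval boundaries over 50 denoising steps.
T, T_INIT, T_LAYOUT, T_SHAPE = 50, 46, 38, 25

def layout_loss(cross_attn):
    # Placeholder: would encourage well-separated subject layouts.
    return cross_attn.mean()

def shape_loss(cross_attn):
    # Placeholder: would refine per-subject shapes.
    return cross_attn.var()

def texture_loss(cross_attn):
    # Placeholder: lighter guidance for texture details late in generation.
    return 0.1 * cross_attn.mean()

def semantic_leakage_detected(self_attn):
    # Placeholder seed-filtering test with a hypothetical threshold.
    return self_attn.max() > 0.9

def generate(seed, max_restarts=3):
    latent = None
    for restart in range(max_restarts):
        torch.manual_seed(seed + restart)      # new noise seed on each restart
        latent = torch.randn(1, 4, 64, 64, requires_grad=True)
        for t in range(T, 0, -1):
            # Stand-ins for the UNet's cross-/self-attention maps at timestep t.
            cross_attn = torch.sigmoid(latent.mean(dim=1))
            self_attn = torch.softmax(latent.flatten(), dim=0)

            # Interval-specific loss selection: the "temporal adaptive" part.
            if t > T_INIT:
                loss = None                    # initial interval: no guidance (assumption)
            elif t > T_LAYOUT:
                loss = layout_loss(cross_attn)
            elif t > T_SHAPE:
                loss = shape_loss(cross_attn)
                if semantic_leakage_detected(self_attn):
                    break                      # seed filtering: restart with a new seed
            else:
                loss = texture_loss(cross_attn)

            if loss is not None:
                # Nudge the latent along the negative loss gradient.
                grad = torch.autograd.grad(loss, latent)[0]
                latent = (latent - 0.1 * grad).detach().requires_grad_(True)
            # A real pipeline would run the diffusion model's denoising step here.
        else:
            return latent.detach()             # completed without a restart
    return latent.detach()                     # give up after max_restarts

if __name__ == "__main__":
    print(generate(seed=0).shape)              # torch.Size([1, 4, 64, 64])

The point of the sketch is the dispatch on the timestep t: each interval selects its own loss, and the seed-filtering check can abandon a poor seed rather than continuing to optimize it.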

Table of Contents

1 Introduction
2 Related Works
3 Proposed Method
3.1 Preliminary
3.1.1 Stable Diffusion Model
3.1.2 Attention layer
3.2 Proposed Method
3.2.1 Layout interval (T, T_layout)
3.2.2 Shape interval [T_layout, T_shape)
3.2.3 Seed filtering
3.2.4 Texture interval [T_shape, 0]
4 Experiments and Results
4.1 Experiments
4.1.1 Experimental settings
4.1.2 Quantitative comparison
4.1.3 Qualitative comparison
4.1.4 Ablation study
5 Conclusions
