Ruixiang Jiang, Changwen Chen
The Hong Kong Polytechnic University
Diffusion models entangle content and style generation during the denoising process, leading to undesired content modification when directly applied to stylization tasks. Existing methods struggle to control the diffusion model effectively enough to meet aesthetic-level requirements for stylization. In this paper, we introduce **Artist**, a training-free approach that aesthetically controls the content and style generation of a pretrained diffusion model for text-driven stylization. Our key insight is to disentangle the denoising of content and style into separate diffusion processes while sharing information between them. We propose simple yet effective content and style control methods that suppress style-irrelevant content generation, resulting in harmonious stylization results. Extensive experiments demonstrate that our method excels at meeting aesthetic-level stylization requirements, preserving intricate details in the content image and aligning well with the style prompt. Furthermore, we showcase the high controllability of the stylization strength from various perspectives.
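As a rough illustration of the dual-branch idea described above (a minimal sketch, not the paper's actual implementation), the code below runs two parallel denoising trajectories, one anchored to the content and one driven by the style prompt, and shares information from the content branch into the style branch at every step. The toy denoiser, the `blend_features` rule, and the `strength` knob are all hypothetical placeholders standing in for a pretrained diffusion UNet and the method's actual control mechanism.

```python
# Sketch of disentangled content/style denoising with information sharing.
# All names here (toy_denoiser, blend_features, strength) are illustrative
# assumptions, not the paper's implementation.
import torch

def toy_denoiser(x, t, cond):
    """Stand-in for a pretrained diffusion model's noise prediction."""
    return 0.1 * (x - cond) * (t / 1000.0)

def blend_features(eps_content, eps_style, strength):
    """Hypothetical sharing rule: keep content structure, inject style."""
    return (1.0 - strength) * eps_content + strength * eps_style

@torch.no_grad()
def dual_branch_denoise(x_T, content_cond, style_cond, steps=50, strength=0.6):
    x_content = x_T.clone()   # branch that preserves the content image
    x_style = x_T.clone()     # branch driven by the style prompt
    for t in reversed(range(1, steps + 1)):
        eps_c = toy_denoiser(x_content, t, content_cond)
        eps_s = toy_denoiser(x_style, t, style_cond)
        # The content branch denoises independently; the style branch
        # receives content information, suppressing style-irrelevant changes.
        x_content = x_content - eps_c
        x_style = x_style - blend_features(eps_c, eps_s, strength)
    return x_style

# Usage with dummy latents and conditions:
latents = torch.randn(1, 4, 64, 64)
stylized = dual_branch_denoise(latents, torch.zeros_like(latents), torch.ones_like(latents))
print(stylized.shape)
```

The `strength` parameter in this sketch hints at how stylization strength can be exposed as a continuous control, echoing the controllability discussed in the abstract.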
If you find this project helpful, please consider citing our paper:
@article{jiang2024artist,
  title={Artist: Aesthetically Controllable Text-Driven Stylization without Training},
  author={Jiang, Ruixiang and Chen, Changwen},
  journal={arXiv preprint arXiv:2407.15842},
  year={2024}
}