FashionComposer: Compositional Fashion Image Generation

Sihui Ji1,2   Yiyang Wang1   Xi Chen1   Xiaogang Xu3   Hao Luo2,4   Hengshuang Zhao1  
1The University of Hong Kong   2DAMO Academy, Alibaba Group   3Zhejiang University   4Hupan Lab

Demonstration of the applications of FashionComposer. FashionComposer treats different kinds of conditions (e.g., garment images, face images, and parametric human models) equally as “assets” to compose diverse and realistic fashion images, thereby supporting various fashion-related applications such as model image generation, virtual try-on, and human album generation.

Pose Customization


Multi-garment Composition with Background


Multi-garment Composition with Face


Multi-garment Virtual Try-on


Customization with Text Instructions


Abstract

We present FashionComposer for compositional fashion image generation. Unlike previous methods, FashionComposer is highly flexible. It takes multi-modal input (i.e., text prompt, parametric human model, garment image, and face image) and supports personalizing the appearance, pose, and figure of the human and assigning multiple garments in one pass. To achieve this, we first develop a universal framework capable of handling diverse input modalities, and we construct scaled training data to strengthen the model’s compositional capabilities. To accommodate multiple reference images (garments and faces) seamlessly, we organize these references in a single image as an “asset library” and employ a reference UNet to extract appearance features. To inject the appearance features into the correct pixels of the generated result, we propose subject-binding attention, which binds the appearance features from different “assets” to the corresponding text features. In this way, the model can understand each asset according to its semantics, supporting arbitrary numbers and types of reference images. As a comprehensive solution, FashionComposer also supports many other applications, such as human album generation and diverse virtual try-on tasks.
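As a rough illustration (not the released implementation), the idea behind subject-binding attention can be sketched as adding each asset's pooled appearance feature onto the text tokens that describe it before computing cross-attention, so that the attention map routes each asset's appearance to the pixels its words attend to. All function names, shapes, and the token-mapping format below are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def subject_binding_attention(latent, text_tokens, asset_feats, asset_to_tokens):
    """Hypothetical single-head sketch.

    latent:          (N, d) image-latent queries
    text_tokens:     (T, d) encoded prompt tokens
    asset_feats:     list of (d,) pooled appearance features, one per asset
    asset_to_tokens: list of token-index lists, mapping each asset to the
                     prompt tokens describing it (e.g. "floral dress")
    """
    keys = text_tokens.copy()
    # Bind each asset's appearance feature to its own text tokens, so the
    # model can understand each asset according to its semantics.
    for feat, idxs in zip(asset_feats, asset_to_tokens):
        keys[idxs] += feat
    attn = softmax(latent @ keys.T / np.sqrt(latent.shape[-1]))
    return attn @ keys
```

Because the binding is per-token, this formulation places no restriction on the number or type of reference assets.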

Overall Pipeline


Overall pipeline of FashionComposer. FashionComposer takes a garment composition, an optional face image, a text prompt, and a DensePose map projected from the SMPL model as inputs. The text prompt is encoded and fused into the UNets through cross-attention and subject-binding attention, while garment features are extracted by the reference UNet and injected into the denoising process through feature injection attention.
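A common way to realize this kind of reference-feature injection (used here as a hedged sketch, not the paper's exact code) is to append the reference UNet's tokens to the keys and values of the denoising UNet's self-attention, letting each generated pixel attend directly to the asset-library image. Names and shapes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_injection_attention(latent, ref_feats):
    """Hypothetical single-head sketch.

    latent:    (N, d) tokens of the denoising UNet at one layer
    ref_feats: (M, d) tokens from the reference UNet, extracted from the
               asset-library image at the matching layer
    """
    # Extend keys/values with the reference tokens; queries stay the
    # denoising tokens, so output length is unchanged.
    kv = np.concatenate([latent, ref_feats], axis=0)
    attn = softmax(latent @ kv.T / np.sqrt(latent.shape[-1]))
    return attn @ kv
```

Keeping the queries unchanged means the layer degrades gracefully to plain self-attention when no reference tokens are supplied.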

BibTeX

@article{ji2024fashioncomposer,
  title={FashionComposer: Compositional Fashion Image Generation},
  author={Ji, Sihui and Wang, Yiyang and Chen, Xi and Xu, Xiaogang and Luo, Hao and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2412.14168},
  year={2024}
}