IMAGDressing

I have shared quite a few virtual try-on projects before, many of them from Alibaba.

Today, let's take a look at IMAGDressing, a virtual try-on project from Tencent and Huawei, which was released last week. The code and model are available on GitHub: https://github.com/muzishen/IMAGDressing

Scenarios

The two tasks differ in their conditioning inputs and applicable scenarios:

  • Virtual try-on (VTON) focuses on letting users try clothes on themselves in a virtual environment, using localized clothing refinement to achieve realistic try-on results and improve the online shopping experience for consumers.
  • Virtual dressing (VD), the task IMAGDressing addresses, emphasizes generating freely editable human images with fixed clothing and optional conditions (such as faces, poses, and scenes), suiting merchants' varied needs for displaying clothing; a sketch of such an input bundle follows this list.
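To make the conditioning difference concrete, here is a hypothetical sketch of what a virtual-dressing input bundle might look like: the garment image is mandatory, while identity, pose, and scene are all optional. The `DressingRequest` name and its fields are illustrative assumptions, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Optional
from PIL import Image

@dataclass
class DressingRequest:
    """Hypothetical condition bundle for the virtual dressing (VD) task:
    the clothing is fixed, everything else is optional and editable."""
    garment: Image.Image                  # required: the clothing image to keep fixed
    face: Optional[Image.Image] = None    # optional identity condition
    pose: Optional[Image.Image] = None    # optional pose map (e.g., for ControlNet)
    prompt: str = "a model in a studio"   # scene/text condition, freely editable
```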

Methodology

IMAGDressing-v1 Framework Diagram

The framework consists mainly of a trainable garment UNet and a frozen denoising UNet: the former extracts fine-grained clothing features, while the latter balances those features against the text prompt. Because the denoising UNet stays frozen, IMAGDressing-v1 remains compatible with other community modules such as ControlNet and IP-Adapter.
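The paper describes this balancing as a hybrid attention module: the denoising UNet's frozen self-attention is kept intact, and a trainable cross-attention branch attends to the garment UNet's features. Below is a minimal PyTorch sketch of that idea; the class name, additive fusion, and layer choices are my assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Sketch of hybrid attention: inject garment features into a frozen
    denoising-UNet block via an extra trainable cross-attention branch.
    Hypothetical structure; see the repo for the real implementation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Frozen self-attention, as inherited from the pretrained denoising UNet.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad = False
        # Trainable cross-attention over features from the garment UNet.
        self.garment_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, garment: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, seq, dim)   latent tokens from the denoising UNet
        # garment: (batch, seq_g, dim) fine-grained features from the garment UNet
        out, _ = self.self_attn(hidden, hidden, hidden)
        g, _ = self.garment_attn(hidden, garment, garment)
        # Additive fusion leaves the frozen pathway untouched while the
        # trainable branch blends in clothing detail.
        return out + g
```

Keeping the self-attention pathway frozen is also what preserves compatibility with modules trained against the base UNet, such as ControlNet and IP-Adapter.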

Comparison

  • Quantitative comparison of IMAGDressing-v1 with several state-of-the-art methods.
  • Qualitative comparison of IMAGDressing-v1 with other state-of-the-art methods (including BLIP-Diffusion, Versatile Diffusion, IP-Adapter, and MagicClothing) under non-specific and specific conditions.