
williamyang1991/DualStyleGAN

[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer


Top Related Projects


StyleGAN2 - Official TensorFlow Implementation

Official PyTorch implementation of StyleGAN3

Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement

StarGAN v2 - Official PyTorch Implementation (CVPR 2020)

Quick Overview

DualStyleGAN is a GitHub project that extends StyleGAN2 to enable dual-domain image synthesis and editing. It allows for the transfer of artistic styles to portrait images while preserving the original identity and structure. The project aims to provide a flexible and powerful tool for creative image manipulation and style transfer.

Pros

  • Enables high-quality artistic style transfer for portrait images
  • Preserves the identity and structure of the original image
  • Offers a wide range of pre-trained models for various artistic styles
  • Provides both image synthesis and editing capabilities

Cons

  • Requires significant computational resources for training and inference
  • Limited to portrait images and specific artistic styles
  • May produce artifacts or unrealistic results in some cases
  • Requires some technical knowledge to set up and use effectively

Code Examples

  1. Loading a pre-trained model:
from model import DualStyleGAN
model = DualStyleGAN('pretrained_models/cartoon')
  2. Performing style transfer:
import torch
from utils import tensor2im

content_image = torch.load('path/to/content_image.pt')
style_image = torch.load('path/to/style_image.pt')
result = model.transfer(content_image, style_image)
result_image = tensor2im(result)
  3. Editing a synthesized image:
latent = torch.randn(1, 512)
edited_latent = model.edit(latent, 'smile', strength=0.5)
edited_image = model.synthesis(edited_latent)

Getting Started

  1. Clone the repository:

    git clone https://github.com/williamyang1991/DualStyleGAN.git
    cd DualStyleGAN
    
  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Download pre-trained models:

    python download_models.py
    
  4. Run the demo:

    from model import DualStyleGAN
    from utils import load_image, tensor2im
    import matplotlib.pyplot as plt
    
    model = DualStyleGAN('pretrained_models/cartoon')
    content = load_image('samples/content/001.jpg')
    style = load_image('samples/style/cartoon/000.jpg')
    
    result = model.transfer(content, style)
    plt.imshow(tensor2im(result))
    plt.show()
    

Competitor Comparisons


StyleGAN2 - Official TensorFlow Implementation

Pros of StyleGAN2

  • More established and widely adopted in the research community
  • Extensive documentation and resources available
  • Highly optimized for generating high-quality images

Cons of StyleGAN2

  • Limited to single-domain image generation
  • Less flexibility in style manipulation compared to DualStyleGAN

Code Comparison

StyleGAN2:

G = Generator(z_dim, w_dim, num_ws, img_resolution, img_channels)
z = torch.randn([batch_size, G.z_dim])
w = G.mapping(z, c)
img = G.synthesis(w)

DualStyleGAN:

G = DualStyleGAN(z_dim, w_dim, num_ws, img_resolution, img_channels)
z = torch.randn([batch_size, G.z_dim])
w_content, w_style = G.mapping(z, c_content, c_style)
img = G.synthesis(w_content, w_style)

The main difference lies in the mapping and synthesis steps. DualStyleGAN separates content and style, allowing for more flexible style manipulation across different domains.

Official PyTorch implementation of StyleGAN3

Pros of StyleGAN3

  • Improved image quality and reduced artifacts compared to previous versions
  • Better performance and faster training times
  • More flexible architecture allowing for various image resolutions

Cons of StyleGAN3

  • Requires more computational resources for training
  • Less focus on style transfer capabilities
  • May be more complex to implement for beginners

Code Comparison

DualStyleGAN:

def forward(self, styles, return_latents=False, inject_index=None, truncation=1, truncation_latent=None, input_is_latent=False, noise=None, randomize_noise=True):
    if not input_is_latent:
        styles = [self.style(s) for s in styles]

StyleGAN3:

def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
    ws = self.mapping(z, c, truncation_psi=truncation_psi, truncation_cutoff=truncation_cutoff, update_emas=update_emas)
    img = self.synthesis(ws, update_emas=update_emas)
    return img

The code comparison shows that StyleGAN3 has a more streamlined forward pass, focusing on mapping and synthesis, while DualStyleGAN includes more options for style manipulation and noise injection.

Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement

Pros of stylegan2-pytorch

  • More comprehensive implementation of StyleGAN2, including features like progressive growing and mixing regularization
  • Better documentation and code organization, making it easier for users to understand and modify
  • Includes additional utilities for training and evaluation, such as FID calculation and model export

Cons of stylegan2-pytorch

  • Lacks the dual-domain style transfer capabilities of DualStyleGAN
  • May require more computational resources due to its more complete implementation
  • Does not focus on specific applications like portrait stylization

Code Comparison

DualStyleGAN:

def forward(self, x, styles, return_latents=False):
    styles = [self.style(s) for s in styles]
    x = self.conv1(x, styles[0])
    return x

stylegan2-pytorch:

def forward(self, styles, return_latents = False, inject_index = None, truncation = 1, truncation_latent = None, input_is_latent = False, noise = None):
    if not input_is_latent:
        styles = [self.style(s) for s in styles]
    x = self.input(styles[0])
    return x

Both repositories implement StyleGAN2-based architectures, but DualStyleGAN focuses on dual-domain style transfer for portrait stylization, while stylegan2-pytorch provides a more general-purpose implementation with additional features and utilities.

StarGAN v2 - Official PyTorch Implementation (CVPR 2020)

Pros of StarGAN v2

  • Supports multi-domain image-to-image translation, allowing for more versatile style transfers
  • Utilizes adaptive layer instance normalization for better style-content disentanglement
  • Provides a more extensive dataset (CelebA-HQ) for training and evaluation

Cons of StarGAN v2

  • Focuses primarily on facial attributes, limiting its application to other image types
  • May require more computational resources due to its multi-domain architecture

Code Comparison

StarGAN v2:

def forward(self, x, s, masks=None):
    return self.decode(self.encode(x), s, masks=masks)

DualStyleGAN:

def forward(self, x, styles, return_latents=False):
    styles = [s.squeeze() for s in styles]
    latent = self.style(torch.cat(styles, 1))
    return self.generator([latent], input_is_latent=True, return_latents=return_latents)

StarGAN v2 uses a more straightforward encoder-decoder architecture, while DualStyleGAN employs a style-based generator with multiple input styles. DualStyleGAN's approach allows for more fine-grained control over the generated images by combining different style inputs.


README

DualStyleGAN - Official PyTorch Implementation

This repository provides the official PyTorch implementation for the following paper:

Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy
In CVPR 2022.
Project Page | Paper | Supplementary Video

Abstract: Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path, respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.

Features:
High-Resolution (1024) | Training Data-Efficient (~200 Images) | Exemplar-Based Color and Structure Transfer

Updates

  • [02/2023] Add --wplus in style_transfer.py to use the original W+ pSp encoder rather than Z+.
  • [09/2022] Pre-trained models in three new styles (feat. StableDiffusion) are released.
  • [07/2022] Source code license is updated.
  • [03/2022] Paper and supplementary video are released.
  • [03/2022] Web demo is created.
  • [03/2022] Code is released.
  • [03/2022] This website is created.

Web Demo

Integrated into Huggingface Spaces 🤗 using Gradio. Try out the Web Demo on Hugging Face Spaces.

Installation

Clone this repo:

git clone https://github.com/williamyang1991/DualStyleGAN.git
cd DualStyleGAN

Dependencies:

All dependencies for defining the environment are provided in environment/dualstylegan_env.yaml. We recommend running this repository using Anaconda:

conda env create -f ./environment/dualstylegan_env.yaml

We use CUDA 10.1, so the environment file installs PyTorch 1.7.1 (corresponding to Line 22, Line 25, and Line 26 of dualstylegan_env.yaml). Please install the PyTorch build that matches your own CUDA version following https://pytorch.org/.
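
For example, an equivalent manual install for CUDA 10.1 (version numbers are illustrative; check https://pytorch.org/ for the command matching your setup):

conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch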

☞ Install on Windows: here and here

(1) Dataset Preparation

Cartoon, Caricature and Anime datasets can be downloaded from their official pages. We also provide the script to build new datasets.

Dataset | Description
Cartoon | 317 cartoon face images from Toonify.
Caricature | 199 images from WebCaricature. Please refer to dataset preparation for more details.
Anime | 174 images from Danbooru Portraits. Please refer to dataset preparation for more details.
Fantasy | 137 fantasy face images generated by StableDiffusion.
Illustration | 156 illustration face images generated by StableDiffusion.
Impasto | 120 impasto face images generated by StableDiffusion.
Other styles | Please refer to dataset preparation for the way of building new datasets.

(2) Inference for Style Transfer and Artistic Portrait Generation

Inference Notebook


To help users get started, we provide a Jupyter notebook found in ./notebooks/inference_playground.ipynb that allows one to visualize the performance of DualStyleGAN. The notebook will download the necessary pretrained models and run inference on the images found in ./data/.

If no GPU is available, you may refer to Inference on CPU, and set device = 'cpu' in the notebook.
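
A minimal way to pick the device automatically (a sketch, assuming the notebook exposes a device variable as described above):

import torch
# Fall back to CPU when no CUDA-capable GPU is available.
device = 'cuda' if torch.cuda.is_available() else 'cpu'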

Pretrained Models

Pretrained models can be downloaded from Google Drive or Baidu Cloud (access code: cvpr):

Model | Description
encoder | Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 Z+ latent code
encoder_wplus | Original Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 W+ latent code
cartoon | DualStyleGAN and sampling models trained on the Cartoon dataset, 317 (refined) extrinsic style codes
caricature | DualStyleGAN and sampling models trained on the Caricature dataset, 199 (refined) extrinsic style codes
anime | DualStyleGAN and sampling models trained on the Anime dataset, 174 (refined) extrinsic style codes
arcane | DualStyleGAN and sampling models trained on the Arcane dataset, 100 extrinsic style codes
comic | DualStyleGAN and sampling models trained on the Comic dataset, 101 extrinsic style codes
pixar | DualStyleGAN and sampling models trained on the Pixar dataset, 122 extrinsic style codes
slamdunk | DualStyleGAN and sampling models trained on the Slamdunk dataset, 120 extrinsic style codes
fantasy | DualStyleGAN models trained on the Fantasy dataset, 137 extrinsic style codes
illustration | DualStyleGAN models trained on the Illustration dataset, 156 extrinsic style codes
impasto | DualStyleGAN models trained on the Impasto dataset, 120 extrinsic style codes

The saved checkpoints are under the following folder structure:

checkpoint
|--encoder.pt                     % Pixel2style2pixel model
|--encoder_wplus.pt               % Pixel2style2pixel model (optional)
|--cartoon
    |--generator.pt               % DualStyleGAN model
    |--sampler.pt                 % The extrinsic style code sampling model
    |--exstyle_code.npy           % extrinsic style codes of Cartoon dataset
    |--refined_exstyle_code.npy   % refined extrinsic style codes of Cartoon dataset
|--caricature
    % the same files as in Cartoon
...

Exemplar-Based Style Transfer

Transfer the style of a default Cartoon image onto a default face:

python style_transfer.py 

The result cartoon_transfer_53_081680.jpg is saved in the folder ./output/, where 53 is the id of the style image in the Cartoon dataset and 081680 is the name of the content face image. A corresponding overview image cartoon_transfer_53_081680_overview.jpg is additionally saved to illustrate the input content image, the encoded content image, the style image (* the style image will be shown only if it is in your folder), and the result:

Specify the style image with --style and --style_id (find the mapping between id and filename here, find the visual mapping between id and the style image here). Specify the filename of the saved images with --name. Specify the weight to adjust the degree of style with --weight. The following script generates the style transfer results in the teaser of the paper.

python style_transfer.py
python style_transfer.py --style cartoon --style_id 10
python style_transfer.py --style caricature --name caricature_transfer --style_id 0 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style caricature --name caricature_transfer --style_id 187 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style anime --name anime_transfer --style_id 17 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1
python style_transfer.py --style anime --name anime_transfer --style_id 48 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1

Specify the content image with --content. If the content image is not well aligned with FFHQ, use --align_face. For preserving the color style of the content image, use --preserve_color or set the last 11 elements of --weight to all zeros.

python style_transfer.py --content ./data/content/unsplash-rDEOVtE7vOs.jpg --align_face --preserve_color \
       --style arcane --name arcane_transfer --style_id 13 \
       --weight 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 1 1 1 1 1 1 1 


Specify --wplus to use the original pSp encoder to extract the W+ intrinsic style code, which may better preserve the face features of the content image.
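
For example (all flags except --wplus are illustrative and follow the commands above):

python style_transfer.py --wplus --style cartoon --style_id 53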

Remarks: Our trained pSp encoder on Z+/W+ space cannot perfectly encode the content image. If a style transfer result more consistent with the content image is desired, one may use latent optimization to better fit the content image or use other StyleGAN encoders (as discussed in https://github.com/williamyang1991/DualStyleGAN/issues/11 and https://github.com/williamyang1991/DualStyleGAN/issues/29).
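
As an illustration of the latent-optimization idea, the following minimal sketch (not part of this repository; encoder, generator, and content_image are hypothetical placeholders for the pSp encoder, the fine-tuned generator, and a preprocessed face tensor) refines the encoded latent code so that the reconstruction better matches the content image:

import torch

def optimize_latent(encoder, generator, content_image, steps=200, lr=0.01):
    # Start from the encoder's estimate of the latent code.
    with torch.no_grad():
        latent = encoder(content_image)
    latent = latent.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        reconstruction = generator(latent)
        # Plain pixel-wise L2 loss; in practice a perceptual term (e.g. LPIPS) is usually added.
        loss = torch.nn.functional.mse_loss(reconstruction, content_image)
        loss.backward()
        optimizer.step()

    return latent.detach()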

More options can be found via python style_transfer.py -h.

Artistic Portrait Generation

Generate random Cartoon face images (Results are saved in the ./output/ folder):

python generate.py 

Specify the style type with --style and the filename of the saved images with --name:

python generate.py --style arcane --name arcane_generate

Specify the weight to adjust the degree of style with --weight.

Keep the intrinsic style code, extrinsic color code or extrinsic structure code fixed using --fix_content, --fix_color and --fix_structure, respectively.

python generate.py --style caricature --name caricature_generate --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 --fix_content

More options can be found via python generate.py -h.


(3) Training DualStyleGAN

Download the supporting models to the ./checkpoint/ folder:

Model | Description
stylegan2-ffhq-config-f.pt | StyleGAN model trained on FFHQ, taken from rosinality.
model_ir_se50.pth | Pretrained IR-SE50 model taken from TreB1eN for ID loss.

Facial Destylization

Step 1: Prepare data. Put the dataset in ./data/DATASET_NAME/images/train/, then create lmdb datasets:

python ./model/stylegan/prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH

For example, download 317 Cartoon images into ./data/cartoon/images/train/ and run

python ./model/stylegan/prepare_data.py --out ./data/cartoon/lmdb/ --n_worker 4 --size 1024 ./data/cartoon/images/

Step 2: Fine-tune StyleGAN. Fine-tune StyleGAN in distributed settings:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_stylegan.py --batch BATCH_SIZE \
       --ckpt FFHQ_MODEL_PATH --iter ITERATIONS --style DATASET_NAME --augment LMDB_PATH

Taking the cartoon dataset as an example, run (a total batch size of 8*4=32 is recommended):

python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_stylegan.py --iter 600 --batch 4 --ckpt ./checkpoint/stylegan2-ffhq-config-f.pt --style cartoon --augment ./data/cartoon/lmdb/

The fine-tuned model can be found in ./checkpoint/cartoon/finetune-000600.pt. Intermediate results are saved in ./log/cartoon/.

Step 3: Destylize artistic portraits.

python destylize.py --model_name FINETUNED_MODEL_NAME --batch BATCH_SIZE --iter ITERATIONS DATASET_NAME

Taking the cartoon dataset as an example, run:

python destylize.py --model_name finetune-000600.pt --batch 1 --iter 300 cartoon

The intrinsic and extrinsic style codes are saved in ./checkpoint/cartoon/instyle_code.npy and ./checkpoint/cartoon/exstyle_code.npy, respectively. Intermediate results are saved in ./log/cartoon/destylization/. To speed up destylization, set --batch to a large value such as 16. For styles severely different from real faces, set --truncation to a small value such as 0.5 to make the results more photo-realistic (this enables DualStyleGAN to learn larger structure deformations).
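
For example, combining a larger batch with a smaller truncation for the cartoon dataset (values are illustrative):

python destylize.py --model_name finetune-000600.pt --batch 16 --iter 300 --truncation 0.5 cartoon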

Progressive Fine-Tuning

Stage 1 & 2: Pretrain DualStyleGAN on FFHQ. We provide our pretrained model generator-pretrain.pt at Google Drive or Baidu Cloud (access code: cvpr). This model is obtained by:

python -m torch.distributed.launch --nproc_per_node=1 --master_port=8765 pretrain_dualstylegan.py --iter 3000 --batch 4 ./data/ffhq/lmdb/

where ./data/ffhq/lmdb/ contains the lmdb data created from the FFHQ dataset via ./model/stylegan/prepare_data.py.

Stage 3: Fine-Tune DualStyleGAN on Target Domain. Fine-tune DualStyleGAN in distributed settings:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_dualstylegan.py --iter ITERATIONS \ 
                          --batch BATCH_SIZE --ckpt PRETRAINED_MODEL_PATH --augment DATASET_NAME

The loss term weights can be specified by --style_loss (λFM), --CX_loss (λCX), --perc_loss (λperc), --id_loss (λID) and --L2_reg_loss (λreg). λID and λreg are suggested to be tuned for each style dataset to achieve ideal performance. More options can be found via python finetune_dualstylegan.py -h.

Take the Cartoon dataset as an example, run (multi-GPU enables a large batch size of 8*4=32 for better performance):

python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_dualstylegan.py --iter 1500 --batch 4 --ckpt ./checkpoint/generator-pretrain.pt --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015 --augment cartoon

The fine-tuned models can be found in ./checkpoint/cartoon/generator-ITER.pt where ITER = 001000, 001100, ..., 001500. Intermediate results are saved in ./log/cartoon/. A larger ITER yields stronger cartoon styles at the cost of more artifacts, so users may select the most balanced checkpoint from 1000-1500. We use 1400 for our paper experiments.

(optional) Latent Optimization and Sampling

Refine extrinsic style code. Refine the color and structure styles to better fit the example style images.

python refine_exstyle.py --lr_color COLOR_LEARNING_RATE --lr_structure STRUCTURE_LEARNING_RATE DATASET_NAME

By default, the code will load instyle_code.npy, exstyle_code.npy, and generator.pt in ./checkpoint/DATASET_NAME/. Use --instyle_path, --exstyle_path, --ckpt to specify other saved style codes or models. Take the Cartoon dataset as an example, run:

python refine_exstyle.py --lr_color 0.1 --lr_structure 0.005 --ckpt ./checkpoint/cartoon/generator-001400.pt cartoon

The refined extrinsic style codes are saved in ./checkpoint/DATASET_NAME/refined_exstyle_code.npy. We suggest tuning lr_color and lr_structure to better fit the example styles.

Training sampling network. Train a sampling network to map unit Gaussian noises to the distribution of extrinsic style codes:

python train_sampler.py DATASET_NAME

By default, the code will load refined_exstyle_code.npy or exstyle_code.npy in ./checkpoint/DATASET_NAME/. Use --exstyle_path to specify other saved extrinsic style codes. The saved model can be found in ./checkpoint/DATASET_NAME/sampler.pt.
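
For example, to point the sampler at the refined Cartoon style codes explicitly (the path simply spells out the default layout described above):

python train_sampler.py --exstyle_path ./checkpoint/cartoon/refined_exstyle_code.npy cartoon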


(4) Results

Exemplar-based cartoon style transfer

https://user-images.githubusercontent.com/18130694/158047991-77c31137-c077-415e-bae2-865ed3ec021f.mp4

Exemplar-based caricature style transfer

https://user-images.githubusercontent.com/18130694/158048107-7b0aa439-5e3a-45a9-be0e-91ded50e9136.mp4

Exemplar-based anime style transfer

https://user-images.githubusercontent.com/18130694/158048114-237b8b81-eff3-4033-89f4-6e8a7bbf67f7.mp4

Other styles

Combine DualStyleGAN with State-of-the-Art Diffusion model

We use StableDiffusion to generate face images in the styles of famous artists. Trained on these images, DualStyleGAN is able to pastiche these artists and generate appealing results.

Citation

If you find this work useful for your research, please consider citing our paper:

@inproceedings{yang2022Pastiche,
  title={Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer},
  author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change},
  booktitle={CVPR},
  year={2022}
}

Acknowledgments

The code is mainly developed based on stylegan2-pytorch and pixel2style2pixel.