
NVlabs / stylegan2

StyleGAN2 - Official TensorFlow Implementation


Top Related Projects


  • DALL-E: PyTorch package for the discrete VAE used for DALL·E.
  • Stable-Diffusion: A latent text-to-image diffusion model.
  • stylegan2-pytorch: The simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch, enabling everyone to experience disentanglement.
  • pytorch-CycleGAN-and-pix2pix: Image-to-image translation in PyTorch.
  • Detectron2: A platform for object detection, segmentation, and other visual recognition tasks.

Quick Overview

StyleGAN2 is an improved version of the StyleGAN architecture for generative adversarial networks (GANs). Developed by NVIDIA researchers, it addresses issues like "blob" artifacts and improves image quality, making it one of the most advanced models for high-resolution image synthesis.
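
Concretely, the "blob" fix comes from replacing AdaIN-style normalization with weight modulation and demodulation inside the generator's convolutions. The following PyTorch-style sketch is purely illustrative (the function name and shapes are ours, not code from this repository):

import torch

def demodulated_weights(weight, style, eps=1e-8):
    # weight: [out_ch, in_ch, k, k] conv kernel; style: [batch, in_ch] per-sample scales from W
    w = weight[None] * style[:, None, :, None, None]                  # modulate input channels
    d = torch.rsqrt((w ** 2).sum(dim=[2, 3, 4], keepdim=True) + eps)  # expected output scale
    return w * d                                                      # demodulate each output feature map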

Pros

  • Produces extremely high-quality, photorealistic images
  • Offers better control over image features and styles
  • Improves upon previous GAN architectures, reducing artifacts
  • Provides a powerful tool for various creative and research applications

Cons

  • Requires significant computational resources for training
  • Complex architecture may be challenging for beginners to understand and implement
  • Limited to image generation tasks
  • Potential ethical concerns regarding deepfakes and synthetic media

Code Examples

# Load a pre-trained StyleGAN2 model
import dnnlib
import dnnlib.tflib as tflib
import pickle

# Initialize TensorFlow
tflib.init_tf()

# Load the pre-trained model
with open("stylegan2-ffhq-config-f.pkl", "rb") as f:
    _, _, Gs = pickle.load(f)

# Generate a random image
import numpy as np

# Generate latent vector
latent_vector = np.random.randn(1, Gs.input_shape[1])

# Generate image
image = Gs.run(latent_vector, None, truncation_psi=0.7, randomize_noise=True, output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))

# Style mixing example
import PIL.Image

# Generate two latent vectors
latent_vector1 = np.random.randn(1, Gs.input_shape[1])
latent_vector2 = np.random.randn(1, Gs.input_shape[1])

# Map both latents to the intermediate W space (one vector per synthesis layer)
w1 = Gs.components.mapping.run(latent_vector1, None)
w2 = Gs.components.mapping.run(latent_vector2, None)

# Take the coarse styles (first 7 layers) from the first latent, the fine styles from the second
mixed_w = w2.copy()
mixed_w[:, :7] = w1[:, :7]

# Synthesize and save the mixed image
mixed_image = Gs.components.synthesis.run(mixed_w, randomize_noise=False, output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))
PIL.Image.fromarray(mixed_image[0], 'RGB').save('mixed_image.png')

Getting Started

  1. Clone the repository:

    git clone https://github.com/NVlabs/stylegan2.git
    cd stylegan2
    
  2. Install dependencies:

    pip install numpy scipy tensorflow-gpu==1.14 pillow requests
    
  3. Download pre-trained models. Networks referenced with the gdrive:networks/<filename>.pkl syntax (as in the next step) are downloaded and cached automatically by pretrained_networks.py; alternatively, download the *.pkl files manually from the StyleGAN2 Google Drive folder.
    
  4. Generate images:

    import pretrained_networks
    import numpy as np
    import dnnlib
    import dnnlib.tflib as tflib
    import PIL.Image
    
    network_pkl = "gdrive:networks/stylegan2-ffhq-config-f.pkl"
    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)
    
    latents = np.random.randn(1, Gs.input_shape[1])
    images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))
    PIL.Image.fromarray(images[0], 'RGB').save('example.png')
    

Competitor Comparisons

DALL-E: PyTorch package for the discrete VAE used for DALL·E.

Pros of DALL-E

  • Generates images from text descriptions, offering more versatile and controllable output
  • Supports a wider range of image generation tasks, including complex scenes and concepts
  • Utilizes a more advanced transformer-based architecture for improved image quality

Cons of DALL-E

  • Requires significantly more computational resources for training and inference
  • Less focus on high-resolution image generation compared to StyleGAN2
  • Limited public access and documentation due to its closed-source nature

Code Comparison

StyleGAN2:

import dnnlib
import dnnlib.tflib as tflib
import pickle

with open('network-snapshot-000000.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

DALL-E:

import torch
from dall_e import load_model

# The public repository ships only the discrete VAE (encoder/decoder), not a text-to-image model
device = torch.device('cuda')
enc = load_model("https://cdn.openai.com/dall-e/encoder.pkl", device)
dec = load_model("https://cdn.openai.com/dall-e/decoder.pkl", device)
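
For reference, here is a self-contained sketch of what the public DALL-E repository actually supports: reconstructing an image through the discrete VAE. The input tensor below is a random placeholder standing in for a preprocessed 256×256 image.

import torch
import torch.nn.functional as F
from dall_e import load_model, map_pixels, unmap_pixels

device = torch.device('cuda')
enc = load_model("https://cdn.openai.com/dall-e/encoder.pkl", device)
dec = load_model("https://cdn.openai.com/dall-e/decoder.pkl", device)

x = torch.rand(1, 3, 256, 256, device=device)               # placeholder RGB image in [0, 1]
z = torch.argmax(enc(map_pixels(x)), dim=1)                 # encode to discrete token ids
z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
x_rec = unmap_pixels(torch.sigmoid(dec(z).float()[:, :3]))  # decode back to an image tensor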

Key Differences

  • StyleGAN2 focuses on generating high-quality images from latent vectors
  • DALL-E generates images based on text descriptions
  • StyleGAN2 is open-source and well-documented, while DALL-E has limited public access
  • DALL-E offers more versatility in image generation tasks
  • StyleGAN2 excels in producing photorealistic images at high resolutions

Stable-Diffusion: A latent text-to-image diffusion model.

Pros of Stable-Diffusion

  • More versatile, capable of generating images from text prompts
  • Faster inference time, especially for larger image sizes
  • Supports a wider range of creative applications and use cases

Cons of Stable-Diffusion

  • Generally lower image quality and less photorealism than StyleGAN2
  • Requires more complex prompts and fine-tuning for optimal results
  • Less control over specific image attributes compared to StyleGAN2

Code Comparison

StyleGAN2:

import torch
import dnnlib
import legacy

network_pkl = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl"
device = torch.device('cuda')
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

Stable-Diffusion:

import torch
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

Both repositories focus on image generation, but Stable-Diffusion offers more flexibility in terms of input and output. StyleGAN2 excels in generating high-quality, photorealistic images within its trained domain, while Stable-Diffusion can create a wider variety of images based on text prompts. The code snippets demonstrate the difference in setup and usage between the two models.
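
As a rough usage sketch (the prompt and file name are illustrative), generating an image from a text prompt with the Stable-Diffusion pipeline looks like this:

import torch
from diffusers import StableDiffusionPipeline

# Illustrative text-to-image call
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
image = pipe("a photograph of an astronaut riding a horse").images[0]  # PIL.Image
image.save("astronaut.png")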

stylegan2-pytorch: The simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch, enabling everyone to experience disentanglement.

Pros of stylegan2-pytorch

  • Implemented in PyTorch, which is more widely used and accessible for many researchers
  • Easier to understand and modify due to cleaner, more modular code structure
  • Includes additional features like tiled generation and custom CUDA kernels

Cons of stylegan2-pytorch

  • May have slightly lower performance compared to the original TensorFlow implementation
  • Lacks some of the additional tools and analysis scripts provided in the original repo

Code Comparison

stylegan2 (TensorFlow):

def G_mapping(latents_in, labels_in, dlatent_broadcast=None, **kwargs):
    act = tf.get_default_graph().get_tensor_by_name('G_mapping/Dense0/act:0')
    with tf.control_dependencies([act]):
        dlatents = components.G_mapping(latents_in, labels_in, **kwargs)
    return dlatents

stylegan2-pytorch (PyTorch):

class StyleVectorizer(nn.Module):
    def __init__(self, emb, depth, lr_mul = 0.1):
        super().__init__()
        layers = []
        for i in range(depth):
            layers.extend([nn.Linear(emb, emb), leaky_relu()])
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

pytorch-CycleGAN-and-pix2pix: Image-to-image translation in PyTorch.

Pros of pytorch-CycleGAN-and-pix2pix

  • Supports multiple image-to-image translation tasks (CycleGAN, pix2pix, etc.)
  • Easier to use and modify for various applications
  • More extensive documentation and examples

Cons of pytorch-CycleGAN-and-pix2pix

  • Generally lower image quality compared to StyleGAN2
  • Less suitable for high-resolution image generation
  • Limited control over specific features in generated images

Code Comparison

StyleGAN2:

import dnnlib
import dnnlib.tflib as tflib
import pickle

with open('network-final.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

pytorch-CycleGAN-and-pix2pix:

from models import create_model
from options.test_options import TestOptions

opt = TestOptions().parse()
model = create_model(opt)
model.setup(opt)

The StyleGAN2 code focuses on loading a pre-trained model, while pytorch-CycleGAN-and-pix2pix emphasizes model creation and setup. StyleGAN2 uses TensorFlow, whereas pytorch-CycleGAN-and-pix2pix is built on PyTorch, reflecting different underlying frameworks and approaches to model implementation.

Detectron2: A platform for object detection, segmentation, and other visual recognition tasks.

Pros of Detectron2

  • More versatile, supporting a wide range of computer vision tasks
  • Actively maintained with frequent updates and community support
  • Extensive documentation and tutorials for easier adoption

Cons of Detectron2

  • Steeper learning curve due to its broader scope
  • Potentially higher computational requirements for some tasks

Code Comparison

Detectron2 (object detection):

from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(image)  # image: BGR NumPy array of shape (H, W, 3)

StyleGAN2 (image generation):

import torch
import dnnlib
import legacy

network_pkl = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/ffhq.pkl"
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].cuda()
z = torch.randn([1, G.z_dim]).cuda()
img = G(z, None)

Both repositories are powerful tools in their respective domains. Detectron2 excels in various computer vision tasks, while StyleGAN2 focuses on high-quality image generation. The choice between them depends on the specific project requirements and the desired output.


README

StyleGAN2 — Official TensorFlow Implementation

Teaser image

Analyzing and Improving the Image Quality of StyleGAN
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila

Paper: http://arxiv.org/abs/1912.04958
Video: https://youtu.be/c-NJtV9Jvp0

Abstract: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent vectors to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

★★★ NEW: StyleGAN2-ADA-PyTorch is now available; see the full list of versions here ★★★

Additional material

StyleGAN2                              Main Google Drive folder
├  stylegan2-paper.pdf                 High-quality version of the paper
├  stylegan2-video.mp4                 High-quality version of the video
├  images                              Example images produced using our method
│  ├  curated-images                   Hand-picked images showcasing our results
│  └  100k-generated-images            Random images with and without truncation
├  videos                              Individual clips of the video as high-quality MP4
└  networks                            Pre-trained networks
   ├  stylegan2-ffhq-config-f.pkl      StyleGAN2 for FFHQ dataset at 1024×1024
   ├  stylegan2-car-config-f.pkl       StyleGAN2 for LSUN Car dataset at 512×384
   ├  stylegan2-cat-config-f.pkl       StyleGAN2 for LSUN Cat dataset at 256×256
   ├  stylegan2-church-config-f.pkl    StyleGAN2 for LSUN Church dataset at 256×256
   ├  stylegan2-horse-config-f.pkl     StyleGAN2 for LSUN Horse dataset at 256×256
   └  ⋯                                Other training configurations used in the paper

Requirements

  • Both Linux and Windows are supported. Linux is recommended for performance and compatibility reasons.
  • 64-bit Python 3.6 installation. We recommend Anaconda3 with numpy 1.14.3 or newer.
  • We recommend TensorFlow 1.14, which we used for all experiments in the paper, but TensorFlow 1.15 is also supported on Linux. TensorFlow 2.x is not supported.
  • On Windows you need to use TensorFlow 1.14, as the standard 1.15 installation does not include necessary C++ headers.
  • One or more high-end NVIDIA GPUs, NVIDIA drivers, CUDA 10.0 toolkit and cuDNN 7.5. To reproduce the results reported in the paper, you need an NVIDIA GPU with at least 16 GB of DRAM.
  • Docker users: use the provided Dockerfile to build an image with the required library dependencies.

StyleGAN2 relies on custom TensorFlow ops that are compiled on the fly using NVCC. To test that your NVCC installation is working correctly, run:

nvcc test_nvcc.cu -o test_nvcc -run
| CPU says hello.
| GPU says hello.

On Windows, the compilation requires Microsoft Visual Studio to be in PATH. We recommend installing Visual Studio Community Edition and adding into PATH using "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat".

Using pre-trained networks

Pre-trained networks are stored as *.pkl files on the StyleGAN2 Google Drive folder. Below, you can either reference them directly using the syntax gdrive:networks/<filename>.pkl, or download them manually and reference by filename.

# Generate uncurated ffhq images (matches paper Figure 12)
python run_generator.py generate-images --network=gdrive:networks/stylegan2-ffhq-config-f.pkl \
  --seeds=6600-6625 --truncation-psi=0.5

# Generate curated ffhq images (matches paper Figure 11)
python run_generator.py generate-images --network=gdrive:networks/stylegan2-ffhq-config-f.pkl \
  --seeds=66,230,389,1518 --truncation-psi=1.0

# Generate uncurated car images
python run_generator.py generate-images --network=gdrive:networks/stylegan2-car-config-f.pkl \
  --seeds=6000-6025 --truncation-psi=0.5

# Example of style mixing (matches the corresponding video clip)
python run_generator.py style-mixing-example --network=gdrive:networks/stylegan2-ffhq-config-f.pkl \
  --row-seeds=85,100,75,458,1500 --col-seeds=55,821,1789,293 --truncation-psi=1.0

The results are placed in results/<RUNNING_ID>/*.png. You can change the location with --result-dir. For example, --result-dir=~/my-stylegan2-results.

You can import the networks in your own Python code using pickle.load(). For this to work, you need to include the dnnlib source directory in PYTHONPATH and create a default TensorFlow session by calling dnnlib.tflib.init_tf(). See run_generator.py and pretrained_networks.py for examples.
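
For example, a minimal loading sketch along those lines (it assumes the network pickle has already been downloaded to the working directory):

import pickle
import dnnlib.tflib as tflib

tflib.init_tf()                          # create the default TensorFlow session
with open('stylegan2-ffhq-config-f.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)          # Gs = long-term average of the generator
print('Output shape:', Gs.output_shape)  # e.g. [None, 3, 1024, 1024]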

Preparing datasets

Datasets are stored as multi-resolution TFRecords, similar to the original StyleGAN. Each dataset consists of multiple *.tfrecords files stored under a common directory, e.g., ~/datasets/ffhq/ffhq-r*.tfrecords. In the following sections, the datasets are referenced using a combination of --dataset and --data-dir arguments, e.g., --dataset=ffhq --data-dir=~/datasets.

FFHQ. To download the Flickr-Faces-HQ dataset as multi-resolution TFRecords, run:

pushd ~
git clone https://github.com/NVlabs/ffhq-dataset.git
cd ffhq-dataset
python download_ffhq.py --tfrecords
popd
python dataset_tool.py display ~/ffhq-dataset/tfrecords/ffhq

LSUN. Download the desired LSUN categories in LMDB format from the LSUN project page. To convert the data to multi-resolution TFRecords, run:

python dataset_tool.py create_lsun_wide ~/datasets/car ~/lsun/car_lmdb --width=512 --height=384
python dataset_tool.py create_lsun ~/datasets/cat ~/lsun/cat_lmdb --resolution=256
python dataset_tool.py create_lsun ~/datasets/church ~/lsun/church_outdoor_train_lmdb --resolution=256
python dataset_tool.py create_lsun ~/datasets/horse ~/lsun/horse_lmdb --resolution=256

Custom. Create custom datasets by placing all training images under a single directory. The images must be square-shaped and they must all have the same power-of-two dimensions. To convert the images to multi-resolution TFRecords, run:

python dataset_tool.py create_from_images ~/datasets/my-custom-dataset ~/my-custom-images
python dataset_tool.py display ~/datasets/my-custom-dataset
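
The prepared dataset can then be referenced by its directory name with the training commands described below, for example (an illustrative single-GPU invocation; adjust the configuration and GPU count to your hardware):

python run_training.py --num-gpus=1 --data-dir=~/datasets --config=config-f \
  --dataset=my-custom-dataset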

Projecting images to latent space

To find the matching latent vectors for a set of images, run:

# Project generated images
python run_projector.py project-generated-images --network=gdrive:networks/stylegan2-car-config-f.pkl \
  --seeds=0,1,5

# Project real images
python run_projector.py project-real-images --network=gdrive:networks/stylegan2-car-config-f.pkl \
  --dataset=car --data-dir=~/datasets

Training networks

To reproduce the training runs for config F in Tables 1 and 3, run:

python run_training.py --num-gpus=8 --data-dir=~/datasets --config=config-f \
  --dataset=ffhq --mirror-augment=true
python run_training.py --num-gpus=8 --data-dir=~/datasets --config=config-f \
  --dataset=car --total-kimg=57000
python run_training.py --num-gpus=8 --data-dir=~/datasets --config=config-f \
  --dataset=cat --total-kimg=88000
python run_training.py --num-gpus=8 --data-dir=~/datasets --config=config-f \
  --dataset=church --total-kimg 88000 --gamma=100
python run_training.py --num-gpus=8 --data-dir=~/datasets --config=config-f \
  --dataset=horse --total-kimg 100000 --gamma=100

For other configurations, see python run_training.py --help.

We have verified that the results match the paper when training with 1, 2, 4, or 8 GPUs. Note that training FFHQ at 1024×1024 resolution requires GPU(s) with at least 16 GB of memory. The following table lists typical training times using NVIDIA DGX-1 with 8 Tesla V100 GPUs:

Configuration  Resolution  Total kimg  1 GPU     2 GPUs    4 GPUs    8 GPUs    GPU mem
config-f       1024×1024   25000       69d 23h   36d 4h    18d 14h   9d 18h    13.3 GB
config-f       1024×1024   10000       27d 23h   14d 11h   7d 10h    3d 22h    13.3 GB
config-e       1024×1024   25000       35d 11h   18d 15h   9d 15h    5d 6h     8.6 GB
config-e       1024×1024   10000       14d 4h    7d 11h    3d 20h    2d 3h     8.6 GB
config-f       256×256     25000       32d 13h   16d 23h   8d 21h    4d 18h    6.4 GB
config-f       256×256     10000       13d 0h    6d 19h    3d 13h    1d 22h    6.4 GB

Training curves for FFHQ config F (StyleGAN2) compared to original StyleGAN using 8 GPUs:

Training curves

After training, the resulting networks can be used the same way as the official pre-trained networks:

# Generate 1000 random images without truncation
python run_generator.py generate-images --seeds=0-999 --truncation-psi=1.0 \
  --network=results/00006-stylegan2-ffhq-8gpu-config-f/networks-final.pkl

Evaluation metrics

To reproduce the numbers for config F in Tables 1 and 3, run:

python run_metrics.py --data-dir=~/datasets --network=gdrive:networks/stylegan2-ffhq-config-f.pkl \
  --metrics=fid50k,ppl_wend --dataset=ffhq --mirror-augment=true
python run_metrics.py --data-dir=~/datasets --network=gdrive:networks/stylegan2-car-config-f.pkl \
  --metrics=fid50k,ppl2_wend --dataset=car
python run_metrics.py --data-dir=~/datasets --network=gdrive:networks/stylegan2-cat-config-f.pkl \
  --metrics=fid50k,ppl2_wend --dataset=cat
python run_metrics.py --data-dir=~/datasets --network=gdrive:networks/stylegan2-church-config-f.pkl \
  --metrics=fid50k,ppl2_wend --dataset=church
python run_metrics.py --data-dir=~/datasets --network=gdrive:networks/stylegan2-horse-config-f.pkl \
  --metrics=fid50k,ppl2_wend --dataset=horse

For other configurations, see the StyleGAN2 Google Drive folder.

Note that the metrics are evaluated using a different random seed each time, so the results will vary between runs. In the paper, we reported the average result of running each metric 10 times. The following table lists the available metrics along with their expected runtimes and random variation:

Metric     FFHQ config F   1 GPU    2 GPUs   4 GPUs   Description
fid50k     2.84 ± 0.03     22 min   14 min   10 min   Fréchet Inception Distance
is50k      5.13 ± 0.02     23 min   14 min   8 min    Inception Score
ppl_zfull  348.0 ± 3.8     41 min   22 min   14 min   Perceptual Path Length in Z, full paths
ppl_wfull  126.9 ± 0.2     42 min   22 min   13 min   Perceptual Path Length in W, full paths
ppl_zend   348.6 ± 3.0     41 min   22 min   14 min   Perceptual Path Length in Z, path endpoints
ppl_wend   129.4 ± 0.8     40 min   23 min   13 min   Perceptual Path Length in W, path endpoints
ppl2_wend  145.0 ± 0.5     41 min   23 min   14 min   Perceptual Path Length without center crop
ls         154.2 / 4.27    10 hrs   6 hrs    4 hrs    Linear Separability
pr50k3     0.689 / 0.492   26 min   17 min   12 min   Precision and Recall

Note that some of the metrics cache dataset-specific data on the disk, and they will take somewhat longer when run for the first time.

License

Copyright © 2019, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License-NC. To view a copy of this license, visit https://nvlabs.github.io/stylegan2/license.html

Citation

@inproceedings{Karras2019stylegan2,
  title     = {Analyzing and Improving the Image Quality of {StyleGAN}},
  author    = {Tero Karras and Samuli Laine and Miika Aittala and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  booktitle = {Proc. CVPR},
  year      = {2020}
}

Acknowledgements

We thank Ming-Yu Liu for an early review, Timo Viitanen for his help with code release, and Tero Kuosmanen for compute infrastructure.