Top Related Projects
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
PyTorch package for the discrete VAE used for DALL·E.
Implementation of Denoising Diffusion Probabilistic Model in Pytorch
Karras et al. (2022) diffusion models for PyTorch
High-Resolution Image Synthesis with Latent Diffusion Models
Quick Overview
Latent Diffusion is a high-resolution image synthesis framework developed by CompVis. It uses a two-stage approach: first compressing images into a latent space, then applying diffusion models in this compressed space. This method allows for efficient generation of high-quality images while reducing computational requirements.
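The following is a minimal, self-contained PyTorch sketch of the two-stage idea, not code from the repository: a toy autoencoder compresses the image into a latent, noise is added according to a diffusion schedule, and a stand-in denoiser learns to predict that noise in the compressed space. Module shapes and schedule values are purely illustrative.
import torch
import torch.nn as nn

# Toy stand-ins for the real first-stage autoencoder and latent-space denoiser.
encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)          # 256x256x3 image -> 32x32x4 latent
decoder = nn.ConvTranspose2d(4, 3, kernel_size=8, stride=8) # latent -> image
denoiser = nn.Conv2d(4, 4, kernel_size=3, padding=1)        # predicts the added noise

betas = torch.linspace(1e-4, 2e-2, 1000)                    # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

image = torch.randn(1, 3, 256, 256)                         # pretend input image
z = encoder(image)                                          # stage 1: compress to latent space

t = torch.randint(0, 1000, (1,))                            # random diffusion timestep
noise = torch.randn_like(z)
a = alphas_cumprod[t].view(-1, 1, 1, 1)
z_noisy = a.sqrt() * z + (1 - a).sqrt() * noise             # forward diffusion in latent space

loss = nn.functional.mse_loss(denoiser(z_noisy), noise)     # stage 2: learn to predict the noise
reconstruction = decoder(z)                                 # map latents back to pixel space
print(loss.item(), reconstruction.shape)
Because the diffusion runs on 32x32x4 latents instead of 256x256x3 pixels, each denoising step is far cheaper, which is the efficiency argument made above.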
Pros
- Enables high-resolution image generation with reduced computational costs
- Produces high-quality, detailed images
- Versatile framework applicable to various image synthesis tasks
- Allows for controllable image generation through conditioning
Cons
- Requires significant GPU resources for training and inference
- Complex architecture may be challenging to understand and implement
- Limited documentation and examples for beginners
- May require fine-tuning for specific use cases
Code Examples
# Note: the snippets below are illustrative pseudocode. The repository itself is
# script-driven (see scripts/txt2img.py); a LatentDiffusion.from_pretrained-style
# wrapper as shown here is a simplified sketch, not the actual interface.

# Load a pre-trained Latent Diffusion model
model = LatentDiffusion.from_pretrained("CompVis/ldm-text2im-large-256")

# Generate an image from a text prompt
prompt = "A beautiful sunset over the ocean"
image = model.generate(prompt)
image.save("sunset.png")

# Perform image-to-image translation
from PIL import Image

source_image = Image.open("source.png")
target_prompt = "Convert this image to an oil painting style"
translated_image = model.translate(source_image, target_prompt)
translated_image.save("oil_painting.png")

# Fine-tune the model on a custom dataset (CustomImageDataset is a placeholder for your own Dataset class)
custom_dataset = CustomImageDataset("path/to/dataset")
model.fine_tune(custom_dataset, epochs=10, learning_rate=1e-5)
Getting Started
To get started with Latent Diffusion:
- Install the required dependencies:
pip install torch torchvision transformers
git clone https://github.com/CompVis/latent-diffusion.git
cd latent-diffusion
pip install -e .
- Download a pre-trained model (illustrative sketch; the repository itself distributes checkpoints via its model-zoo download scripts rather than a from_pretrained API, see the diffusers-based alternative below):
from latent_diffusion import LatentDiffusion
model = LatentDiffusion.from_pretrained("CompVis/ldm-text2im-large-256")
- Generate an image:
prompt = "A futuristic cityscape at night"
image = model.generate(prompt)
image.save("cityscape.png")
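For a pip-installable route to the same checkpoint, the model is also published on the Hugging Face Hub and can be loaded through the diffusers library. The sketch below follows the model card's usage; the exact call signature and output attribute may vary across diffusers versions:
from diffusers import DiffusionPipeline

# Loads the LDM text-to-image pipeline published as CompVis/ldm-text2im-large-256.
pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")

prompt = "A futuristic cityscape at night"
result = pipe([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6)
result.images[0].save("cityscape.png")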
Competitor Comparisons
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Pros of diffusers
- More user-friendly and easier to integrate into existing projects
- Offers a wider range of pre-trained models and pipelines
- Actively maintained with frequent updates and community support
Cons of diffusers
- May have slightly higher computational overhead due to abstraction layers
- Less flexibility for low-level customization of the diffusion process
Code comparison
latent-diffusion (illustrative configuration; parameter names approximate the repo's config options):
model = LatentDiffusion(
    linear_start=0.00085,
    linear_end=0.0120,
    num_timesteps=1000,
    latent_channels=4,
    channels=128,
    channel_mult=[1, 2, 4, 4],
)
diffusers:
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]
The latent-diffusion code focuses on model initialization, while diffusers provides a higher-level API for easy use of pre-trained models and generation pipelines.
PyTorch package for the discrete VAE used for DALL·E.
Pros of DALL-E
- More advanced and capable of generating highly detailed and diverse images
- Trained on a larger dataset, resulting in better understanding of complex concepts
- Produces higher resolution outputs with better coherence and consistency
Cons of DALL-E
- Closed-source, limiting accessibility and customization options
- Requires significant computational resources for training and inference
- Less flexibility in terms of model architecture and fine-tuning
Code Comparison
While both repositories are focused on image generation, their codebases differ significantly. Here's a brief comparison of their image generation functions:
DALL-E:
DALL-E:
def generate_images(prompt, num_images=1):
    return model.generate(prompt, num_images)
Latent-diffusion:
def generate_images(prompt, num_images=1):
    latents = model.encode(prompt)
    return model.decode(latents, num_samples=num_images)
Latent-diffusion uses a two-step process of encoding and decoding, while DALL-E's approach is more direct. However, due to DALL-E being closed-source, the actual implementation details are not publicly available.
Implementation of Denoising Diffusion Probabilistic Model in Pytorch
Pros of denoising-diffusion-pytorch
- Simpler implementation, making it easier to understand and modify
- More focused on the core diffusion process, without additional complexities
- Potentially faster training and inference due to its streamlined approach
Cons of denoising-diffusion-pytorch
- Less feature-rich compared to latent-diffusion
- May not achieve the same level of image quality as latent-diffusion
- Limited to basic diffusion models without advanced techniques like latent space compression
Code Comparison
latent-diffusion:
def forward(self, x, t, c):
    t_emb = timestep_embedding(t, self.model_channels)
    emb = self.time_embed(t_emb)
    h = x.type(self.dtype)
    for module in self.input_blocks:
        h = module(h, emb, c)
    h = self.middle_block(h, emb, c)
denoising-diffusion-pytorch:
def forward(self, x, time):
    x = self.init_conv(x)
    r = x.clone()
    for block in self.blocks:
        x = block(x, time)
    return self.final_conv(x) + r
The code comparison shows that latent-diffusion has a more complex forward pass, incorporating conditioning and multiple input blocks, while denoising-diffusion-pytorch has a simpler structure with a single loop over blocks.
Karras et al. (2022) diffusion models for PyTorch
Pros of k-diffusion
- More flexible and customizable diffusion models
- Supports a wider range of sampling methods
- Better performance on certain tasks, especially with fewer sampling steps
Cons of k-diffusion
- Less extensive documentation compared to latent-diffusion
- Smaller community and fewer pre-trained models available
- May require more expertise to implement and fine-tune effectively
Code Comparison
k-diffusion (illustrative sketch; actual class and function names in the library differ):
model = diffusion.DiffusionModel(
    unet, sigma_data=1.0, sigma_min=0.02, sigma_max=100
)
x = torch.randn(4, 3, 64, 64)
x = diffusion.sample(model, x, steps=20, clip_denoised=True)
latent-diffusion (illustrative sketch):
model = LatentDiffusion.from_pretrained("CompVis/ldm-text2im-large-256")
prompt = ["a painting of a cat"]
images = model(prompt, num_inference_steps=50, guidance_scale=7.5)
Both repositories offer diffusion-based image generation models, but k-diffusion provides more flexibility in model architecture and sampling methods. latent-diffusion, on the other hand, offers a more user-friendly interface and a larger selection of pre-trained models. The code examples demonstrate the different approaches: k-diffusion allows for more low-level control, while latent-diffusion provides a higher-level API for easier use.
High-Resolution Image Synthesis with Latent Diffusion Models
Pros of stablediffusion
- More advanced and refined implementation of the latent diffusion model
- Includes additional features like inpainting and image-to-image translation
- Better documentation and community support
Cons of stablediffusion
- Larger model size, requiring more computational resources
- More complex codebase, potentially harder for beginners to understand
- Stricter licensing terms compared to latent-diffusion
Code Comparison
latent-diffusion:
model = LatentDiffusion(
    linear_start=0.0015,
    linear_end=0.0195,
    n_steps=1000,
    latent_channels=4
)
stablediffusion:
model = StableDiffusion(
    unet_config={
        "target": "ldm.modules.diffusionmodules.openaimodel.UNetModel",
        "params": {
            "image_size": 32,
            "in_channels": 4,
            "out_channels": 4,
            "model_channels": 320,
            "attention_resolutions": [4, 2, 1],
            "num_res_blocks": 2,
            "channel_mult": [1, 2, 4, 4],
            "num_heads": 8,
            "use_spatial_transformer": True,
            "transformer_depth": 1,
            "context_dim": 768,
            "use_checkpoint": True,
            "legacy": False,
        },
    },
)
README
Latent Diffusion Models
High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach*,
Andreas Blattmann*,
Dominik Lorenz,
Patrick Esser,
Björn Ommer
* equal contribution
News
July 2022
- Inference code and model weights to run our retrieval-augmented diffusion models are now available. See this section.
April 2022
- Thanks to Katherine Crowson, classifier-free guidance received a ~2x speedup and the PLMS sampler is available. See also this PR.
- Our 1.45B latent diffusion LAION model was integrated into Huggingface Spaces 🤗 using Gradio. Try out the Web Demo.
- More pre-trained LDMs are available:
  - A 1.45B model trained on the LAION-400M database.
  - A class-conditional model on ImageNet, achieving a FID of 3.6 when using classifier-free guidance. Available via a colab notebook.
Requirements
A suitable conda environment named ldm can be created and activated with:
conda env create -f environment.yaml
conda activate ldm
Pretrained Models
A general list of all available checkpoints is available via our model zoo. If you use any of these models in your work, we are always happy to receive a citation.
Retrieval Augmented Diffusion Models
We include inference code to run our retrieval-augmented diffusion models (RDMs) as described in https://arxiv.org/abs/2204.11824.
To get started, install the additionally required python packages into your ldm environment
pip install transformers==4.19.2 scann kornia==0.6.4 torchmetrics==0.6.0
pip install git+https://github.com/arogozhnikov/einops.git
and download the trained weights (preliminary checkpoints):
mkdir -p models/rdm/rdm768x768/
wget -O models/rdm/rdm768x768/model.ckpt https://ommer-lab.com/files/rdm/model.ckpt
As these models are conditioned on a set of CLIP image embeddings, our RDMs support different inference modes, which are described in the following.
RDM with text-prompt only (no explicit retrieval needed)
Since CLIP offers a shared image/text feature space, and RDMs learn to cover a neighborhood of a given example during training, we can directly take a CLIP text embedding of a given prompt and condition on it. Run this mode via
python scripts/knn2img.py --prompt "a happy bear reading a newspaper, oil on canvas"
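Conceptually, this mode embeds the prompt with CLIP's text encoder and conditions the RDM on that embedding directly. A rough illustration of how such an embedding is obtained, using the Hugging Face CLIP implementation (the checkpoint name is only an example; this is not the repository's conditioning code):
import torch
from transformers import CLIPModel, CLIPTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

inputs = tokenizer(["a happy bear reading a newspaper, oil on canvas"],
                   padding=True, return_tensors="pt")
with torch.no_grad():
    text_embedding = clip.get_text_features(**inputs)  # lives in CLIP's shared image/text space

# Because images and text share this space, the text embedding can stand in for
# the retrieved-image embeddings the RDM was trained to condition on.
print(text_embedding.shape)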
RDM with text-to-image retrieval
To be able to run an RDM conditioned on a text prompt and, additionally, on images retrieved from this prompt, you will also need to download the corresponding retrieval database. We provide two distinct databases extracted from the OpenImages and ArtBench datasets. Interchanging the databases results in different capabilities of the model, as visualized below, although the learned weights are the same in both cases.
Download the retrieval databases, which contain the retrieval datasets (OpenImages (~11GB) and ArtBench (~82MB)) compressed into CLIP image embeddings:
mkdir -p data/rdm/retrieval_databases
wget -O data/rdm/retrieval_databases/artbench.zip https://ommer-lab.com/files/rdm/artbench_databases.zip
wget -O data/rdm/retrieval_databases/openimages.zip https://ommer-lab.com/files/rdm/openimages_database.zip
unzip data/rdm/retrieval_databases/artbench.zip -d data/rdm/retrieval_databases/
unzip data/rdm/retrieval_databases/openimages.zip -d data/rdm/retrieval_databases/
We also provide trained ScaNN search indices for ArtBench. Download and extract via
mkdir -p data/rdm/searchers
wget -O data/rdm/searchers/artbench.zip https://ommer-lab.com/files/rdm/artbench_searchers.zip
unzip data/rdm/searchers/artbench.zip -d data/rdm/searchers
Since the index for OpenImages is large (~21 GB), we provide a script to create and save it for usage during sampling. Note, however, that sampling with the OpenImages database will not be possible without this index. Run the script via
python scripts/train_searcher.py
Retrieval based text-guided sampling with visual nearest neighbors can be started via
python scripts/knn2img.py --prompt "a happy pineapple" --use_neighbors --knn <number_of_neighbors>
Note that the maximum supported number of neighbors is 20.
The database can be changed via the cmd parameter --database, which can be one of [openimages, artbench-art_nouveau, artbench-baroque, artbench-expressionism, artbench-impressionism, artbench-post_impressionism, artbench-realism, artbench-renaissance, artbench-romanticism, artbench-surrealism, artbench-ukiyo_e].
For using --database openimages, the above script (scripts/train_searcher.py) must be executed beforehand.
Due to their relatively small size, the ArtBench databases are best suited for creating more abstract concepts and do not work well for detailed text control.
Coming Soon
- better models
- more resolutions
- image-to-image retrieval
Text-to-Image
Download the pre-trained weights (5.7GB)
mkdir -p models/ldm/text2img-large/
wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
and sample with
python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50
This will save each sample individually as well as a grid of size n_iter x n_samples at the specified output location (default: outputs/txt2img-samples).
Quality, sampling speed and diversity are best controlled via the scale, ddim_steps and ddim_eta arguments.
As a rule of thumb, higher values of scale produce better samples at the cost of a reduced output diversity.
Furthermore, increasing ddim_steps generally also gives higher quality samples, but returns are diminishing for values > 250.
Fast sampling (i.e. low values of ddim_steps) while retaining good quality can be achieved by using --ddim_eta 0.0.
Faster sampling (i.e. even lower values of ddim_steps) while retaining good quality can be achieved by using --ddim_eta 0.0 and --plms (see Pseudo Numerical Methods for Diffusion Models on Manifolds).
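For intuition, scale is the classifier-free guidance weight: at each sampling step the model is evaluated once without and once with the prompt conditioning, and the two noise predictions are combined. A minimal sketch of that combination (function and argument names are illustrative, not the repository's):
def guided_noise_prediction(model, x_t, t, cond, uncond, scale):
    # Classifier-free guidance: push the conditional prediction away from
    # the unconditional one in proportion to the guidance scale.
    eps_uncond = model(x_t, t, uncond)  # prediction with empty conditioning
    eps_cond = model(x_t, t, cond)      # prediction with the text conditioning
    return eps_uncond + scale * (eps_cond - eps_uncond)
With scale=1.0 this reduces to the plain conditional prediction; larger values trade output diversity for prompt adherence, which matches the rule of thumb above.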
Beyond 256²
For certain inputs, simply running the model in a convolutional fashion on larger features than it was trained on can sometimes produce interesting results. To try it out, tune the H and W arguments (which will be integer-divided by 8 in order to calculate the corresponding latent size), e.g. run
python scripts/txt2img.py --prompt "a sunset behind a mountain range, vector image" --ddim_eta 1.0 --n_samples 1 --n_iter 1 --H 384 --W 1024 --scale 5.0
to create a sample of size 384x1024. Note, however, that controllability is reduced compared to the 256x256 setting.
The example below was generated using the above command.
Inpainting
Download the pre-trained weights
wget -O models/ldm/inpainting_big/last.ckpt https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1
and sample with
python scripts/inpaint.py --indir data/inpainting_examples/ --outdir outputs/inpainting_results
indir should contain images *.png and masks <image_fname>_mask.png like the examples provided in data/inpainting_examples.
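Before running the script, it can help to verify that indir follows this naming convention. A small, purely illustrative helper (not part of the repository):
from pathlib import Path

def collect_inpainting_pairs(indir):
    # Return (image, mask) pairs following the <name>.png / <name>_mask.png convention.
    pairs = []
    for image in sorted(Path(indir).glob("*.png")):
        if image.stem.endswith("_mask"):
            continue  # skip the masks themselves
        mask = image.with_name(f"{image.stem}_mask.png")
        if mask.exists():
            pairs.append((image, mask))
        else:
            print(f"warning: no mask found for {image.name}")
    return pairs

print(collect_inpainting_pairs("data/inpainting_examples"))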
Class-Conditional ImageNet
Available via a notebook.
Unconditional Models
We also provide a script for sampling from unconditional LDMs (e.g. LSUN, FFHQ, ...). Start it via
CUDA_VISIBLE_DEVICES=<GPU_ID> python scripts/sample_diffusion.py -r models/ldm/<model_spec>/model.ckpt -l <logdir> -n <#samples> --batch_size <batch_size> -c <#ddim steps> -e <#eta>
Train your own LDMs
Data preparation
Faces
For downloading the CelebA-HQ and FFHQ datasets, proceed as described in the taming-transformers repository.
LSUN
The LSUN datasets can be conveniently downloaded via the script available here.
We performed a custom split into training and validation images, and provide the corresponding filenames at https://ommer-lab.com/files/lsun.zip.
After downloading, extract them to ./data/lsun. The beds/cats/churches subsets should also be placed/symlinked at ./data/lsun/bedrooms, ./data/lsun/cats and ./data/lsun/churches, respectively.
ImageNet
The code will try to download (through Academic Torrents) and prepare ImageNet the first time it is used. However, since ImageNet is quite large, this requires a lot of disk space and time. If you already have ImageNet on your disk, you can speed things up by putting the data into ${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/ (which defaults to ~/.cache/autoencoders/data/ILSVRC2012_{split}/data/), where {split} is one of train/validation. It should have the following structure:
${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/
├── n01440764
│   ├── n01440764_10026.JPEG
│   ├── n01440764_10027.JPEG
│   ├── ...
├── n01443537
│   ├── n01443537_10007.JPEG
│   ├── n01443537_10014.JPEG
│   ├── ...
├── ...
If you haven't extracted the data, you can also place ILSVRC2012_img_train.tar / ILSVRC2012_img_val.tar (or symlinks to them) into ${XDG_CACHE}/autoencoders/data/ILSVRC2012_train/ and ${XDG_CACHE}/autoencoders/data/ILSVRC2012_validation/, which will then be extracted into the above structure without downloading it again. Note that this will only happen if neither a folder ${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/ nor a file ${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/.ready exists. Remove them if you want to force running the dataset preparation again.
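The prepare-or-skip behaviour described above comes down to a simple existence check. A hedged sketch of that logic (paths follow the README's description; the helper name is hypothetical, not the repository's actual function):
import os

def imagenet_is_prepared(split, cachedir=None):
    # Preparation is skipped if either the extracted data folder or the
    # .ready marker file already exists for the given split.
    cachedir = cachedir or os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    root = os.path.join(cachedir, "autoencoders", "data", f"ILSVRC2012_{split}")
    return (os.path.isdir(os.path.join(root, "data"))
            or os.path.isfile(os.path.join(root, ".ready")))

# Delete the folder/marker, as noted above, to force re-running the preparation.
print(imagenet_is_prepared("train"), imagenet_is_prepared("validation"))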
Model Training
Logs and checkpoints for trained models are saved to logs/<START_DATE_AND_TIME>_<config_spec>.
Training autoencoder models
Configs for training a KL-regularized autoencoder on ImageNet are provided at configs/autoencoder.
Training can be started by running
CUDA_VISIBLE_DEVICES=<GPU_ID> python main.py --base configs/autoencoder/<config_spec>.yaml -t --gpus 0,
where config_spec is one of {autoencoder_kl_8x8x64 (f=32, d=64), autoencoder_kl_16x16x16 (f=16, d=16), autoencoder_kl_32x32x4 (f=8, d=4), autoencoder_kl_64x64x3 (f=4, d=3)}.
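For intuition, the downsampling factor f and latent dimensionality d determine the latent tensor an LDM later operates on. A quick, purely illustrative calculation for 256x256 inputs:
configs = {
    "autoencoder_kl_8x8x64": (32, 64),   # (downsampling factor f, latent channels d)
    "autoencoder_kl_16x16x16": (16, 16),
    "autoencoder_kl_32x32x4": (8, 4),
    "autoencoder_kl_64x64x3": (4, 3),
}
H = W = 256
for name, (f, d) in configs.items():
    print(f"{name}: latent shape {H // f}x{W // f}x{d}")  # matches the name, e.g. 32x32x4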
For training VQ-regularized models, see the taming-transformers repository.
Training LDMs
In configs/latent-diffusion/ we provide configs for training LDMs on the LSUN, CelebA-HQ, FFHQ and ImageNet datasets.
Training can be started by running
CUDA_VISIBLE_DEVICES=<GPU_ID> python main.py --base configs/latent-diffusion/<config_spec>.yaml -t --gpus 0,
where <config_spec> is one of {celebahq-ldm-vq-4 (f=4, VQ-reg. autoencoder, spatial size 64x64x3), ffhq-ldm-vq-4 (f=4, VQ-reg. autoencoder, spatial size 64x64x3), lsun_bedrooms-ldm-vq-4 (f=4, VQ-reg. autoencoder, spatial size 64x64x3), lsun_churches-ldm-kl-8 (f=8, KL-reg. autoencoder, spatial size 32x32x4), cin-ldm-vq-f8 (f=8, VQ-reg. autoencoder, spatial size 32x32x4)}.
Model Zoo
Pretrained Autoencoding Models
All models were trained until convergence (no further substantial improvement in rFID).
Model | rFID vs val | train steps | PSNR | PSIM | Link | Comments |
---|---|---|---|---|---|---|
f=4, VQ (Z=8192, d=3) | 0.58 | 533066 | 27.43 +/- 4.26 | 0.53 +/- 0.21 | https://ommer-lab.com/files/latent-diffusion/vq-f4.zip | |
f=4, VQ (Z=8192, d=3) | 1.06 | 658131 | 25.21 +/- 4.17 | 0.72 +/- 0.26 | https://heibox.uni-heidelberg.de/f/9c6681f64bb94338a069/?dl=1 | no attention |
f=8, VQ (Z=16384, d=4) | 1.14 | 971043 | 23.07 +/- 3.99 | 1.17 +/- 0.36 | https://ommer-lab.com/files/latent-diffusion/vq-f8.zip | |
f=8, VQ (Z=256, d=4) | 1.49 | 1608649 | 22.35 +/- 3.81 | 1.26 +/- 0.37 | https://ommer-lab.com/files/latent-diffusion/vq-f8-n256.zip | |
f=16, VQ (Z=16384, d=8) | 5.15 | 1101166 | 20.83 +/- 3.61 | 1.73 +/- 0.43 | https://heibox.uni-heidelberg.de/f/0e42b04e2e904890a9b6/?dl=1 | |
f=4, KL | 0.27 | 176991 | 27.53 +/- 4.54 | 0.55 +/- 0.24 | https://ommer-lab.com/files/latent-diffusion/kl-f4.zip | |
f=8, KL | 0.90 | 246803 | 24.19 +/- 4.19 | 1.02 +/- 0.35 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | |
f=16, KL (d=16) | 0.87 | 442998 | 24.08 +/- 4.22 | 1.07 +/- 0.36 | https://ommer-lab.com/files/latent-diffusion/kl-f16.zip | |
f=32, KL (d=64) | 2.04 | 406763 | 22.27 +/- 3.93 | 1.41 +/- 0.40 | https://ommer-lab.com/files/latent-diffusion/kl-f32.zip |
Get the models
Running the following script downloads and extracts all available pretrained autoencoding models.
bash scripts/download_first_stages.sh
The first stage models can then be found in models/first_stage_models/<model_spec>
Pretrained LDMs
Dataset | Task | Model | FID | IS | Prec | Recall | Link | Comments |
---|---|---|---|---|---|---|---|---|
CelebA-HQ | Unconditional Image Synthesis | LDM-VQ-4 (200 DDIM steps, eta=0) | 5.11 (5.11) | 3.29 | 0.72 | 0.49 | https://ommer-lab.com/files/latent-diffusion/celeba.zip | |
FFHQ | Unconditional Image Synthesis | LDM-VQ-4 (200 DDIM steps, eta=1) | 4.98 (4.98) | 4.50 (4.50) | 0.73 | 0.50 | https://ommer-lab.com/files/latent-diffusion/ffhq.zip | |
LSUN-Churches | Unconditional Image Synthesis | LDM-KL-8 (400 DDIM steps, eta=0) | 4.02 (4.02) | 2.72 | 0.64 | 0.52 | https://ommer-lab.com/files/latent-diffusion/lsun_churches.zip | |
LSUN-Bedrooms | Unconditional Image Synthesis | LDM-VQ-4 (200 DDIM steps, eta=1) | 2.95 (3.0) | 2.22 (2.23) | 0.66 | 0.48 | https://ommer-lab.com/files/latent-diffusion/lsun_bedrooms.zip | |
ImageNet | Class-conditional Image Synthesis | LDM-VQ-8 (200 DDIM steps, eta=1) | 7.77(7.76)* /15.82** | 201.56(209.52)* /78.82** | 0.84* / 0.65** | 0.35* / 0.63** | https://ommer-lab.com/files/latent-diffusion/cin.zip | *: w/ guiding, classifier_scale 10 **: w/o guiding, scores in bracket calculated with script provided by ADM |
Conceptual Captions | Text-conditional Image Synthesis | LDM-VQ-f4 (100 DDIM steps, eta=0) | 16.79 | 13.89 | N/A | N/A | https://ommer-lab.com/files/latent-diffusion/text2img.zip | finetuned from LAION |
OpenImages | Super-resolution | LDM-VQ-4 | N/A | N/A | N/A | N/A | https://ommer-lab.com/files/latent-diffusion/sr_bsr.zip | BSR image degradation |
OpenImages | Layout-to-Image Synthesis | LDM-VQ-4 (200 DDIM steps, eta=0) | 32.02 | 15.92 | N/A | N/A | https://ommer-lab.com/files/latent-diffusion/layout2img_model.zip | |
Landscapes | Semantic Image Synthesis | LDM-VQ-4 | N/A | N/A | N/A | N/A | https://ommer-lab.com/files/latent-diffusion/semantic_synthesis256.zip | |
Landscapes | Semantic Image Synthesis | LDM-VQ-4 | N/A | N/A | N/A | N/A | https://ommer-lab.com/files/latent-diffusion/semantic_synthesis.zip | finetuned on resolution 512x512 |
Get the models
The LDMs listed above can jointly be downloaded and extracted via
bash scripts/download_models.sh
The models can then be found in models/ldm/<model_spec>.
Coming Soon...
- More inference scripts for conditional LDMs.
- In the meantime, you can play with our colab notebook https://colab.research.google.com/drive/1xqzUi2iXQXDqXBHQGP9Mqt2YrYW6cx-J?usp=sharing
Comments
- Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase and https://github.com/lucidrains/denoising-diffusion-pytorch. Thanks for open-sourcing!
- The implementation of the transformer encoder is from x-transformers by lucidrains.
BibTeX
@misc{rombach2021highresolution,
      title={High-Resolution Image Synthesis with Latent Diffusion Models},
      author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
      year={2021},
      eprint={2112.10752},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{https://doi.org/10.48550/arxiv.2204.11824,
      doi = {10.48550/ARXIV.2204.11824},
      url = {https://arxiv.org/abs/2204.11824},
      author = {Blattmann, Andreas and Rombach, Robin and Oktay, Kaan and Ommer, Björn},
      keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
      title = {Retrieval-Augmented Diffusion Models},
      publisher = {arXiv},
      year = {2022},
      copyright = {arXiv.org perpetual, non-exclusive license}
}