Top Related Projects
High-Resolution Image Synthesis with Latent Diffusion Models
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Stable Diffusion web UI
T2I-Adapter
Quick Overview
ControlNet is an innovative neural network architecture for adding spatial conditioning to text-to-image diffusion models. It allows for precise control over image generation by incorporating additional input conditions like edge maps, depth maps, or segmentation maps. This enables users to guide the image generation process with more specificity and achieve desired outcomes more consistently.
Pros
- Enhances control and precision in image generation tasks
- Supports various types of conditioning inputs (e.g., edge maps, depth maps, segmentation maps)
- Compatible with popular text-to-image models like Stable Diffusion
- Offers a wide range of applications in creative and professional contexts
Cons
- Requires additional computational resources compared to standard diffusion models
- May have a steeper learning curve for users unfamiliar with advanced image generation techniques
- Limited by the quality and availability of conditioning inputs
- Potential ethical concerns regarding the creation of highly realistic synthetic images
Code Examples
# Loading and applying a ControlNet model
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

# Generate an image with edge map control
# (edge_image is assumed to be a precomputed Canny edge map, e.g. a PIL image)
image = pipe(
    "a beautiful landscape",
    num_inference_steps=20,
    generator=torch.manual_seed(0),
    image=edge_image
).images[0]
# Using multiple ControlNet models
controlnet_canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
controlnet_depth = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[controlnet_canny, controlnet_depth],
    torch_dtype=torch.float16
)
pipe.to("cuda")

# Generate an image with multiple controls
# (canny_image and depth_image are assumed to be precomputed control maps)
image = pipe(
    "a futuristic cityscape",
    num_inference_steps=20,
    generator=torch.manual_seed(0),
    image=[canny_image, depth_image]
).images[0]
Getting Started
To get started with ControlNet:

- Install the required dependencies:

pip install diffusers transformers accelerate

- Download a pre-trained ControlNet model:

from diffusers import ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")

- Set up the pipeline and generate images:

from diffusers import StableDiffusionControlNetPipeline
import torch

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

# Generate an image (your_control_image is the conditioning image, e.g. an edge map)
image = pipe("your prompt here", image=your_control_image).images[0]
Competitor Comparisons
High-Resolution Image Synthesis with Latent Diffusion Models
Pros of stablediffusion
- More comprehensive and versatile, offering a complete text-to-image generation pipeline
- Larger community and more extensive documentation
- Supports multiple model architectures and training techniques
Cons of stablediffusion
- Requires more computational resources and setup time
- Less focused on specific control techniques for image generation
- May be more challenging for beginners to customize and fine-tune
Code Comparison
ControlNet:
from annotator.util import resize_image, HWC3
from cldm.model import create_model, load_state_dict
model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict('./models/control_sd15_canny.pth', location='cpu'))
stablediffusion:
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
ControlNet focuses on providing fine-grained control over image generation, while stablediffusion offers a more general-purpose text-to-image generation pipeline. ControlNet is better suited for tasks requiring precise control over output images, whereas stablediffusion is more versatile and widely adopted for various image generation tasks.
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
Pros of diffusers
- Broader scope, covering various diffusion models and techniques
- Extensive documentation and tutorials for easier adoption
- Seamless integration with other Hugging Face libraries and ecosystem
Cons of diffusers
- Less specialized in image editing and manipulation compared to ControlNet
- May require more setup and configuration for specific tasks
- Potentially slower inference times for certain use cases
Code Comparison
ControlNet example:
from annotator.canny import CannyDetector
from cldm.model import create_model, load_state_dict
model = create_model('./models/cldm_v15.yaml').cpu()
model.load_state_dict(load_state_dict('./models/control_sd15_canny.pth', location='cpu'))
diffusers example:
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
Stable Diffusion web UI
Pros of stable-diffusion-webui
- More comprehensive and feature-rich UI for Stable Diffusion
- Extensive plugin ecosystem and community support
- Regular updates and active development
Cons of stable-diffusion-webui
- Steeper learning curve due to numerous features
- May require more computational resources
Code Comparison
ControlNet (Python):
def forward(self, x, hint, timesteps, context=None):
    t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)
    emb = self.time_embed(t_emb)
    hint = self.input_hint_block(hint)
    return self.input_blocks(x, emb, context, hint)
stable-diffusion-webui (JavaScript):
function createImage(prompt, negativePrompt, steps, cfg_scale, seed) {
    return fetch('/sdapi/v1/txt2img', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt, negative_prompt: negativePrompt, steps, cfg_scale, seed })
    }).then(response => response.json());
}
T2I-Adapter
Pros of T2I-Adapter
- More lightweight and modular architecture
- Easier to integrate into existing pipelines
- Potentially faster inference times
Cons of T2I-Adapter
- Less extensive documentation and community support
- Fewer pre-trained models available
- May require more fine-tuning for specific use cases
Code Comparison
T2I-Adapter:
adapter = T2IAdapter(input_dim=3, output_dim=768)
adapter_features = adapter(control_image)
image = pipe(prompt, adapter_features=adapter_features).images[0]
ControlNet:
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet)
image = pipe(prompt, image=control_image).images[0]
Both ControlNet and T2I-Adapter aim to enhance control over image generation in diffusion models. ControlNet offers a more comprehensive solution with extensive pre-trained models and strong community support. T2I-Adapter, on the other hand, provides a more flexible and lightweight approach that can be easily integrated into existing pipelines. The choice between the two depends on specific project requirements, available resources, and the desired level of control and customization.
README
News: A nightly version of ControlNet 1.1 is released!
ControlNet 1.1 is released. Those new models will be merged to this repo after we make sure that everything is good.
Below is ControlNet 1.0
Official implementation of Adding Conditional Control to Text-to-Image Diffusion Models.
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
It copies the weights of the neural network blocks into a "locked" copy and a "trainable" copy.
The "trainable" one learns your condition. The "locked" one preserves your model.
Thanks to this, training with a small dataset of image pairs will not destroy the production-ready diffusion models.
The "zero convolution" is a 1×1 convolution with both weight and bias initialized to zeros.
Before training, all zero convolutions output zeros, and ControlNet will not cause any distortion.
No layer is trained from scratch. You are still fine-tuning. Your original model is safe.
This allows training on small-scale or even personal devices.
This is also friendly to merge/replacement/offsetting of models/weights/blocks/layers.
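As a minimal sketch of the idea (not the repo's exact helper; the channel count is illustrative), a zero convolution is simply a 1×1 convolution whose parameters start at zero:

import torch.nn as nn

def zero_module(module):
    # Zero-initialize all parameters so the module initially outputs zero.
    for p in module.parameters():
        nn.init.zeros_(p)
    return module

# A "zero convolution": a 1x1 conv whose weight and bias start at zero, so the
# ControlNet branch adds nothing to the locked model before training begins.
zero_conv = zero_module(nn.Conv2d(320, 320, kernel_size=1))  # 320 channels is illustrative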
FAQ
Q: But wait, if the weight of a conv layer is zero, the gradient will also be zero, and the network will not learn anything. Why "zero convolution" works?
A: This is not true. See an explanation here.
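A toy check of this (just an illustration, not the linked explanation): the gradient with respect to a zero-initialized weight depends on the input, not on the weight, so it is generally non-zero.

import torch
import torch.nn as nn

conv = nn.Conv2d(4, 4, kernel_size=1)
nn.init.zeros_(conv.weight)
nn.init.zeros_(conv.bias)

x = torch.randn(1, 4, 8, 8)
conv(x).sum().backward()
print(conv.weight.grad.abs().sum() > 0)  # True: the zero conv still receives useful gradients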
Stable Diffusion + ControlNet
By repeating the above simple structure 14 times, we can control stable diffusion in this way:
In this way, the ControlNet can reuse the SD encoder as a deep, strong, robust, and powerful backbone to learn diverse controls. Much evidence (like this and this) validates that the SD encoder is an excellent backbone.
Note that the way we connect the layers is computationally efficient. The original SD encoder does not need to store gradients (the locked original SD Encoder Blocks 1-4 and Middle). The required GPU memory is not much larger than that of the original SD, although many layers are added. Great!
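In training code, this amounts to freezing the locked copy and optimizing only the ControlNet branch; a rough sketch with placeholder modules standing in for the real networks:

import torch
import torch.nn as nn

# Placeholders standing in for the locked SD encoder and the trainable ControlNet copy.
locked_unet = nn.Sequential(nn.Conv2d(4, 320, 3, padding=1))
control_branch = nn.Sequential(nn.Conv2d(4, 320, 3, padding=1))

for p in locked_unet.parameters():       # the locked copy never stores gradients
    p.requires_grad_(False)

for p in control_branch.parameters():    # only the trainable copy (and zero convs) learn
    p.requires_grad_(True)

optimizer = torch.optim.AdamW(control_branch.parameters(), lr=1e-5)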
Features & News
2023/04/14 - We released ControlNet 1.1. Those new models will be merged to this repo after we make sure that everything is good.
2023/03/03 - We released a discussion - Precomputed ControlNet: Speed up ControlNet by 45%, but is it necessary?
2023/02/26 - We released a blog - Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP?
2023/02/20 - Implementation for non-prompt mode released. See also Guess Mode / Non-Prompt Mode.
2023/02/12 - Now you can play with any community model by Transferring the ControlNet.
2023/02/11 - Low VRAM mode is added. Please use this mode if you are using an 8GB GPU or if you want a larger batch size.
Production-Ready Pretrained Models
First create a new conda environment
conda env create -f environment.yaml
conda activate control
All models and detectors can be downloaded from our Hugging Face page. Put SD models in "ControlNet/models" and detectors in "ControlNet/annotator/ckpts". Make sure you download all necessary pretrained weights and detector models from that page, including the HED edge detection model, the Midas depth estimation model, Openpose, and so on.
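For reference, the expected layout looks roughly like this (only the canny model name appears elsewhere in this README; the other entries are indicative):

ControlNet/
  models/
    control_sd15_canny.pth
    ... (the other control_sd15_*.pth models)
  annotator/
    ckpts/
      ... (HED, Midas, Openpose, and other detector weights)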
We provide 9 Gradio apps with these models.
All test images can be found at the folder "test_imgs".
ControlNet with Canny Edge
Stable Diffusion 1.5 + ControlNet (using simple Canny edge detection)
python gradio_canny2image.py
The Gradio app also allows you to change the Canny edge thresholds. Just try it for more details.
Prompt: "bird"
Prompt: "cute dog"
ControlNet with M-LSD Lines
Stable Diffusion 1.5 + ControlNet (using simple M-LSD straight line detection)
python gradio_hough2image.py
The Gradio app also allows you to change the M-LSD thresholds. Just try it for more details.
Prompt: "room"
Prompt: "building"
ControlNet with HED Boundary
Stable Diffusion 1.5 + ControlNet (using soft HED Boundary)
python gradio_hed2image.py
The soft HED Boundary will preserve many details in input images, making this app suitable for recoloring and stylizing. Just try it for more details.
Prompt: "oil painting of handsome old man, masterpiece"
Prompt: "Cyberpunk robot"
ControlNet with User Scribbles
Stable Diffusion 1.5 + ControlNet (using Scribbles)
python gradio_scribble2image.py
Note that the UI is based on Gradio, and Gradio is somewhat difficult to customize. Right now you need to draw scribbles outside the UI (using your favorite drawing software, for example, MS Paint) and then import the scribble image to Gradio.
Prompt: "turtle"
Prompt: "hot air balloon"
Interactive Interface
We actually provide an interactive interface
python gradio_scribble2image_interactive.py
However, because Gradio is very buggy and difficult to customize, right now users need to first set the canvas width and height and then click "Open drawing canvas" to get a drawing area. Please do not upload an image to that drawing canvas. Also, the drawing area is very small; it should be bigger. But I failed to find out how to make it larger. Again, Gradio is really buggy. (Now fixed, will update asap)
The below dog sketch is drawn by me. Perhaps we should draw a better dog for showcase.
Prompt: "dog in a room"
ControlNet with Fake Scribbles
Stable Diffusion 1.5 + ControlNet (using fake scribbles)
python gradio_fake_scribble2image.py
Sometimes we are lazy, and we do not want to draw scribbles. This script uses the exact same scribble-based model but uses a simple algorithm to synthesize scribbles from input images.
Prompt: "bag"
Prompt: "shose" (Note that "shose" is a typo; it should be "shoes". But it still seems to work.)
ControlNet with Human Pose
Stable Diffusion 1.5 + ControlNet (using human pose)
python gradio_pose2image.py
Apparently, this model deserves a better UI to directly manipulate the pose skeleton. However, again, Gradio is somewhat difficult to customize. Right now you need to input an image and then Openpose will detect the pose for you.
Prompt: "Chief in the kitchen"
Prompt: "An astronaut on the moon"
ControlNet with Semantic Segmentation
Stable Diffusion 1.5 + ControlNet (using semantic segmentation)
python gradio_seg2image.py
This model uses ADE20K's segmentation protocol. Again, this model deserves a better UI to directly draw the segmentation. However, again, Gradio is somewhat difficult to customize. Right now you need to input an image and then a model called Uniformer will detect the segmentation for you. Just try it for more details.
Prompt: "House"
Prompt: "River"
ControlNet with Depth
Stable Diffusion 1.5 + ControlNet (using depth map)
python gradio_depth2image.py
Great! Now SD 1.5 also has depth control. FINALLY. So many possibilities (considering SD1.5 has many more community models than SD2).
Note that, unlike Stability's model, this ControlNet receives the full 512×512 depth map rather than a 64×64 depth map (Stability's SD2 depth model uses 64×64 depth maps). This means that this ControlNet will preserve more details in the depth map.
This is always a strength, because if users do not want to preserve more details, they can simply use another SD to post-process an image-to-image pass. But if they want to preserve more details, ControlNet becomes their only choice. Again, SD2 uses 64×64 depth; we use 512×512.
Prompt: "Stormtrooper's lecture"
ControlNet with Normal Map
Stable Diffusion 1.5 + ControlNet (using normal map)
python gradio_normal2image.py
This model uses a normal map. Right now in the app, the normal map is computed from the Midas depth map and a user threshold (which determines how much of the area is background with an identity normal facing the viewer; tune the "Normal background threshold" in the Gradio app to get a feel for it).
Prompt: "Cute toy"
Prompt: "Plaster statue of Abraham Lincoln"
Compared to the depth model, this model seems to be a bit better at preserving geometry. This is intuitive: minor details are not salient in depth maps, but are salient in normal maps. Below is the depth result with the same inputs. You can see that the hairstyle of the man in the input image is modified by the depth model but preserved by the normal model.
Prompt: "Plaster statue of Abraham Lincoln"
ControlNet with Anime Line Drawing
We also trained a relatively simple ControlNet for anime line drawings. This tool may be useful for artistic creations. (Although the image details in the results are modified a bit, since it still diffuses latent images.)
This model is not available right now. We need to evaluate the potential risks before releasing this model. Nevertheless, you may be interested in transferring the ControlNet to any community model.
Guess Mode / Non-Prompt Mode
The "guess mode" (or called non-prompt mode) will completely unleash all the power of the very powerful ControlNet encoder.
See also the blog - Ablation Study: Why ControlNets use deep encoder? What if it was lighter? Or even an MLP?
You need to manually check the "Guess Mode" toggle to enable this mode.
In this mode, the ControlNet encoder will try its best to recognize the content of the input control map, such as a depth map, edge map, or scribbles, even if you remove all prompts.
Let's have fun with some very challenging experimental settings!
No prompts. No "positive" prompts. No "negative" prompts. No extra caption detector. One single diffusion loop.
For this mode, we recommend using 50 steps and a guidance scale between 3 and 5.
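If you are using the diffusers-based examples from the Quick Overview above, roughly the same behavior is exposed through the pipeline's guess_mode flag; a hedged sketch (pipe and edge_image refer to those earlier examples):

# Guess mode / non-prompt mode with the diffusers pipeline from earlier:
# empty prompt, ~50 steps, guidance scale in the recommended 3-5 range.
image = pipe(
    prompt="",
    image=edge_image,
    guess_mode=True,
    num_inference_steps=50,
    guidance_scale=3.5,
).images[0]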
No prompts:
Note that the below example is 768×768. No prompts. No "positive" prompts. No "negative" prompts.
By tuning the parameters, you can get some very interesting results like the ones below:
Because no prompt is available, the ControlNet encoder will "guess" what is in the control map. Sometimes the guess is really interesting. Because the diffusion algorithm can essentially give multiple results, the ControlNet seems able to give multiple guesses, like this:
Without a prompt, HED seems good at generating images that look like paintings when the control strength is relatively low:
The Guess Mode is also supported in WebUI Plugin:
No prompts. Default WebUI parameters. Pure random results with the seed being 12345. Standard SD1.5. Input scribble is in "test_imgs" folder to reproduce.
Below is another challenging example:
No prompts. Default WebUI parameters. Pure random results with the seed being 12345. Standard SD1.5. Input scribble is in "test_imgs" folder to reproduce.
Note that in the guess mode, you will still be able to input prompts. The only difference is that the model will "try harder" to guess what is in the control map even if you do not provide the prompt. Just try it yourself!
Besides, if you write a script (using something like BLIP) to generate image captions from the "guess mode" images, and then use the generated captions as prompts to diffuse again, you will get a SOTA pipeline for fully automatic conditional image generation.
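A minimal sketch of that caption-and-rediffuse idea (the BLIP checkpoint name is an assumption, and guessed_image, pipe, and edge_image refer to the earlier diffusers examples):

from transformers import BlipProcessor, BlipForConditionalGeneration

# Caption a guess-mode result with BLIP, then diffuse again using the caption as the prompt.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

inputs = processor(guessed_image, return_tensors="pt")  # guessed_image: a guess-mode output (PIL image)
caption = processor.decode(blip.generate(**inputs)[0], skip_special_tokens=True)

final_image = pipe(caption, image=edge_image, num_inference_steps=50).images[0]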
Combining Multiple ControlNets
ControlNets are composable: more than one ControlNet can easily be combined for multi-condition control.
Right now this feature is at an experimental stage in Mikubill's A1111 Webui Plugin:
As long as the models are controlling the same SD, the "boundary" between different research projects does not even exist. This plugin also allows different methods to work together!
Use ControlNet in Any Community Model (SD1.X)
This is an experimental feature.
Or you may want to use Mikubill's A1111 Webui Plugin, which is plug-and-play and does not need manual merging.
Annotate Your Own Data
We provide simple python scripts to process images.
Train with Your Own Data
Training a ControlNet is as easy as (or even easier than) training a simple pix2pix.
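As a rough illustration of what paired training data can look like (a sketch assuming a JSON-lines file with source/target image paths and a prompt; see the training tutorial in this repo for the actual loader):

import json
import cv2
import numpy as np
from torch.utils.data import Dataset

class PairedControlDataset(Dataset):
    # Illustrative sketch, not the repo's exact loader: each JSON line is assumed to hold
    # a condition image path ("source"), a target image path ("target"), and a "prompt".
    def __init__(self, prompt_file):
        with open(prompt_file) as f:
            self.items = [json.loads(line) for line in f]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        item = self.items[idx]
        source = cv2.cvtColor(cv2.imread(item["source"]), cv2.COLOR_BGR2RGB)
        target = cv2.cvtColor(cv2.imread(item["target"]), cv2.COLOR_BGR2RGB)
        hint = source.astype(np.float32) / 255.0          # conditioning image in [0, 1]
        image = target.astype(np.float32) / 127.5 - 1.0   # target image in [-1, 1]
        return dict(hint=hint, image=image, prompt=item["prompt"])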
Related Resources
Special thanks to the great project - Mikubill's A1111 Webui Plugin!
We also thank Hysts for making the Hugging Face Space, as well as more than 65 models in that amazing Colab list!
Thanks to haofanwang for making ControlNet-for-Diffusers!
We also thank all the authors who made ControlNet demos, including but not limited to fffiloni, other-model, ThereforeGames, RamAnanth1, etc.!
Besides, you may also want to read these amazing related works:
Composer: Creative and Controllable Image Synthesis with Composable Conditions: A much bigger model to control diffusion!
T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models: A much smaller model to control stable diffusion!
ControlLoRA: A Light Neural Network To Control Stable Diffusion Spatial Information: Implement Controlnet using LORA!
And these amazing recent projects: InstructPix2Pix Learning to Follow Image Editing Instructions, Pix2pix-zero: Zero-shot Image-to-Image Translation, Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation, MaskSketch: Unpaired Structure-guided Masked Image Generation, SEGA: Instructing Diffusion using Semantic Dimensions, Universal Guidance for Diffusion Models, Region-Aware Diffusion for Zero-shot Text-driven Image Editing, Domain Expansion of Image Generators, Image Mixer, MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
Citation
@misc{zhang2023adding,
  title={Adding Conditional Control to Text-to-Image Diffusion Models},
  author={Lvmin Zhang and Anyi Rao and Maneesh Agrawala},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  year={2023},
}