
huggingface/diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and Flax.


Top Related Projects

High-Resolution Image Synthesis with Latent Diffusion Models

Stable Diffusion web UI


Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.

Generative Models by Stability AI

Quick Overview

Hugging Face's Diffusers is a state-of-the-art library for diffusion models in computer vision and audio. It provides pre-trained models, training utilities, and inference pipelines for various diffusion-based generative AI tasks, including image generation, inpainting, and audio synthesis.

Pros

  • Extensive collection of pre-trained diffusion models
  • Easy-to-use API for both inference and fine-tuning
  • Seamless integration with other Hugging Face libraries
  • Active development and community support

Cons

  • Can be computationally intensive, requiring significant GPU resources
  • Learning curve for understanding diffusion models and their parameters
  • Limited documentation for some advanced features
  • Dependency on other libraries may lead to version conflicts

Code Examples

  1. Basic image generation using Stable Diffusion:
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
  2. Image-to-image generation with Stable Diffusion:
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
import torch

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

init_image = Image.open("path_to_initial_image.png").convert("RGB")
prompt = "A fantasy landscape with a castle"
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")
  3. Audio generation using Audio Diffusion:
from diffusers import AudioDiffusionPipeline
import scipy.io.wavfile
import torch

pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

output = pipe(batch_size=1, num_inference_steps=50)
audio = output.audios[0]  # NumPy array of raw audio samples

# NumPy arrays have no .save() method; write a WAV file with scipy instead
scipy.io.wavfile.write("generated_audio.wav", rate=pipe.mel.get_sample_rate(), data=audio.transpose())

Getting Started

To get started with Diffusers, follow these steps:

  1. Install the library:
pip install diffusers transformers accelerate
  2. Import and use a pre-trained model:
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a beautiful sunset over the ocean"
image = pipe(prompt).images[0]
image.save("sunset.png")

This example loads a pre-trained Stable Diffusion model and generates an image based on the given prompt.
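
For reproducible outputs, you can pass a seeded torch.Generator to the pipeline call. This is a standard diffusers pattern; the sketch below continues the example above:

import torch

# Fixing the seed makes repeated calls produce the same image
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(prompt, generator=generator).images[0]
image.save("sunset_seeded.png")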

Competitor Comparisons

High-Resolution Image Synthesis with Latent Diffusion Models

Pros of stablediffusion

  • More focused on the Stable Diffusion model, providing specialized tools and optimizations
  • Offers direct integration with Stability AI's ecosystem and services
  • Includes advanced features like textual inversion and custom pipelines

Cons of stablediffusion

  • Less versatile compared to diffusers, which supports multiple models and architectures
  • May have a steeper learning curve for beginners due to its more specialized nature
  • Potentially slower update cycle for new features and models

Code Comparison

stablediffusion:

import torch
from ldm.util import instantiate_from_config
from omegaconf import OmegaConf

# Build the model from its config; the checkpoint weights are loaded separately
config = OmegaConf.load("configs/stable-diffusion/v1-inference.yaml")
model = instantiate_from_config(config.model)

diffusers:

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]

The stablediffusion repository provides lower-level access to the model, while diffusers offers a more user-friendly API with pre-built pipelines. diffusers supports a wider range of models and use cases, making it more versatile for general use, while stablediffusion may be preferred for advanced users focusing specifically on Stable Diffusion.

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • User-friendly web interface for easy interaction with Stable Diffusion models
  • Extensive features including inpainting, outpainting, and various image processing tools
  • Active community with frequent updates and extensions

Cons of stable-diffusion-webui

  • Less flexible for integration into custom applications or workflows
  • Primarily focused on image generation, with limited support for other diffusion tasks
  • Steeper learning curve for developers wanting to modify or extend core functionality

Code Comparison

diffusers:

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]
image.save("sunset.png")

stable-diffusion-webui:

import modules.scripts as scripts
import gradio as gr
from modules.processing import process_images

class ExampleScript(scripts.Script):
    def title(self):
        return "Example Script"

    def ui(self, is_img2img):
        return [gr.Textbox(label="Prompt")]

    def run(self, p, prompt):
        # Override the prompt, then run the processing job and return its result
        p.prompt = prompt
        return process_images(p)

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.

Pros of InvokeAI

  • More user-friendly interface with a web UI and CLI options
  • Specialized features for image generation, including inpainting and outpainting
  • Active community with frequent updates and contributions

Cons of InvokeAI

  • Less flexible for general machine learning tasks
  • Steeper learning curve for developers new to image generation
  • More resource-intensive due to its comprehensive features

Code Comparison

InvokeAI:

from invokeai.app.invocations.baseinvocation import BaseInvocation

class CustomInvocation(BaseInvocation):
    def invoke(self, context):
        # Custom image generation logic goes here
        ...

Diffusers:

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]

InvokeAI focuses on providing a complete image generation solution with a user-friendly interface, while Diffusers offers a more flexible and modular approach for various diffusion models. InvokeAI is better suited for end-users and artists, whereas Diffusers is more adaptable for researchers and developers working on diverse machine learning projects.

Generative Models by Stability AI

Pros of generative-models

  • Focuses specifically on Stability AI's models, offering deeper integration and optimization
  • Provides more direct access to cutting-edge generative AI research from Stability AI
  • Includes specialized tools and utilities tailored for Stability AI's model architectures

Cons of generative-models

  • Less extensive documentation and community support compared to diffusers
  • Narrower scope, primarily centered around Stability AI's models rather than a broad range of architectures
  • May have a steeper learning curve for users not familiar with Stability AI's specific approaches

Code Comparison

diffusers:

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]

generative-models:

from sgm.inference.api import ModelArchitecture, SamplingParams, SamplingPipeline

# Load one of Stability AI's pretrained architectures and sample from a text prompt
pipeline = SamplingPipeline(ModelArchitecture.SD_2_1)
samples = pipeline.text_to_image(
    params=SamplingParams(),
    prompt="A beautiful sunset over the ocean",
    samples=1,
)


README




🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.

🤗 Diffusers offers three core components:

  • State-of-the-art diffusion pipelines that can be run in inference with just a few lines of code.
  • Interchangeable noise schedulers for trading off generation speed against output quality (see the scheduler-swap sketch after this list).
  • Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
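
To illustrate the interchangeable schedulers, here is a minimal sketch that swaps a pipeline's default scheduler for DPMSolverMultistepScheduler, building the replacement from the pipeline's own scheduler config:

from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the existing scheduler's config so the swap is drop-in
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
# DPM-Solver++ typically reaches good quality in far fewer steps
image = pipeline("An image of a squirrel in Picasso style", num_inference_steps=20).images[0]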

Installation

We recommend installing 🤗 Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch and Flax, please refer to their official documentation.

PyTorch

With pip (official package):

pip install --upgrade diffusers[torch]

With conda (maintained by the community):

conda install -c conda-forge diffusers

Flax

With pip (official package):

pip install --upgrade diffusers[flax]

Apple Silicon (M1/M2) support

Please refer to the How to use Stable Diffusion in Apple Silicon guide.
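
In short, the guide amounts to moving the pipeline to the mps device. A minimal sketch, assuming a PyTorch build with MPS support:

from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")  # requires PyTorch built with Metal Performance Shaders support

# Attention slicing reduces peak memory, which helps on unified-memory Macs
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]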

Quickstart

Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the from_pretrained method to load any pretrained diffusion model (browse the Hub for 30,000+ checkpoints):

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")
pipeline("An image of a squirrel in Picasso style").images[0]

You can also dig into the models and schedulers toolbox to build your own diffusion system:

from diffusers import DDPMScheduler, UNet2DModel
from PIL import Image
import torch

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler.set_timesteps(50)

sample_size = model.config.sample_size
noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")
input = noise

for t in scheduler.timesteps:
    with torch.no_grad():
        # Predict the noise residual for the current timestep
        noisy_residual = model(input, t).sample
        # Step the scheduler backwards to get the less-noisy previous sample
        prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample
        input = prev_noisy_sample

# Map the final sample from [-1, 1] to [0, 1] and convert it to a PIL image
image = (input / 2 + 0.5).clamp(0, 1)
image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
image = Image.fromarray((image * 255).round().astype("uint8"))
image

Check out the Quickstart to launch your diffusion journey today!

How to navigate the documentation

Documentation | What can I learn?
Tutorial | A basic crash course in the library's most important features, like using models and schedulers to build your own diffusion system and training your own diffusion model.
Loading | Guides for loading and configuring all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers.
Pipelines for inference | Guides for using pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and contributing a pipeline to the library.
Optimization | Guides for optimizing your diffusion model to run faster and consume less memory.
Training | Guides for training a diffusion model for different tasks with different training techniques.
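
As a taste of what the Optimization guides cover, here is a minimal sketch using two built-in memory-saving switches; both are standard diffusers pipeline methods, and enable_model_cpu_offload additionally requires accelerate:

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# Compute attention in slices rather than one large matmul to cut peak memory
pipe.enable_attention_slicing()
# Keep submodules on the CPU and move each to the GPU only while it runs (requires accelerate)
pipe.enable_model_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]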

Contribution

We ❤️ contributions from the open-source community! If you want to contribute, please check out our Contribution guide and keep an eye out for issues you'd like to tackle.

Also, say 👋 in our public Discord channel. We discuss the hottest trends in diffusion models, help each other with contributions and personal projects, or just hang out ☕.

Popular Tasks & Pipelines

Task | Pipeline | 🤗 Hub
Unconditional Image Generation | DDPM | google/ddpm-ema-church-256
Text-to-Image | Stable Diffusion Text-to-Image | runwayml/stable-diffusion-v1-5
Text-to-Image | unCLIP | kakaobrain/karlo-v1-alpha
Text-to-Image | DeepFloyd IF | DeepFloyd/IF-I-XL-v1.0
Text-to-Image | Kandinsky | kandinsky-community/kandinsky-2-2-decoder
Text-guided Image-to-Image | ControlNet | lllyasviel/sd-controlnet-canny
Text-guided Image-to-Image | InstructPix2Pix | timbrooks/instruct-pix2pix
Text-guided Image-to-Image | Stable Diffusion Image-to-Image | runwayml/stable-diffusion-v1-5
Text-guided Image Inpainting | Stable Diffusion Inpainting | runwayml/stable-diffusion-inpainting
Image Variation | Stable Diffusion Image Variation | lambdalabs/sd-image-variations-diffusers
Super Resolution | Stable Diffusion Upscale | stabilityai/stable-diffusion-x4-upscaler
Super Resolution | Stable Diffusion Latent Upscale | stabilityai/sd-x2-latent-upscaler
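
As one example from the table, the inpainting pipeline repaints a masked region of an image from a text prompt. A minimal sketch (the image and mask paths are placeholders):

from diffusers import StableDiffusionInpaintPipeline
from PIL import Image
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # placeholder path
mask_image = Image.open("mask.png").convert("RGB")   # white pixels are repainted

image = pipe(prompt="a red brick fireplace", image=init_image, mask_image=mask_image).images[0]
image.save("inpainted.png")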

Popular libraries using 🧨 Diffusers

Thank you for using us ❤️.

Credits

This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished today:

  • @CompVis' latent diffusion models library, available here
  • @hojonathanho's original DDPM implementation, available here, as well as the extremely useful translation into PyTorch by @pesser, available here
  • @ermongroup's DDIM implementation, available here
  • @yang-song's Score-VE and Score-VP implementations, available here

We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available here as well as @crowsonkb and @rromb for useful discussions and insights.

Citation

@misc{von-platen-etal-2022-diffusers,
  author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Dhruv Nair and Sayak Paul and William Berman and Yiyi Xu and Steven Liu and Thomas Wolf},
  title = {Diffusers: State-of-the-art diffusion models},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/diffusers}}
}