
varunshenoy/opendream

An extensible, easy-to-use, and portable diffusion web UI 👨‍🎨


Top Related Projects

  • High-Resolution Image Synthesis with Latent Diffusion Models
  • Stable Diffusion web UI
  • 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

Quick Overview

OpenDream is an open-source web UI for Stable Diffusion image generation, designed to run efficiently on consumer GPUs. It aims to provide a more accessible and customizable alternative to proprietary AI image generation services, allowing users to create high-quality images from text prompts on their own hardware.

Pros

  • Runs on consumer-grade GPUs, making AI image generation more accessible
  • Open-source nature allows for customization and community contributions
  • Provides a local alternative to cloud-based image generation services
  • Supports various image generation techniques and models

Cons

  • May require significant GPU resources for optimal performance
  • Limited documentation compared to more established image generation tools
  • Potential for slower image generation compared to cloud-based services
  • Requires technical knowledge to set up and use effectively

Code Examples

The snippets below sketch a hypothetical high-level Python API; the OpenDream class and its generate methods are illustrative names, since the project itself is driven through its web UI and extension system (see the README below).

# Initialize the OpenDream model
from opendream import OpenDream

model = OpenDream(model_path="path/to/model")

# Generate an image from a text prompt
image = model.generate("A serene landscape with mountains and a lake")
image.save("generated_landscape.png")

# Use advanced settings for image generation
image = model.generate(
    "A futuristic cityscape at night",
    width=768,
    height=512,
    num_inference_steps=50,
    guidance_scale=7.5,
)
image.save("futuristic_city.png")

# Generate multiple images from a single prompt
images = model.generate_batch(
    "A cute robot in various poses",
    num_images=4,
    batch_size=2,
)
for i, img in enumerate(images):
    img.save(f"robot_pose_{i}.png")

Getting Started

  1. Install OpenDream:

    pip install opendream
    
  2. Download a pre-trained model (e.g., from HuggingFace).

  3. Use OpenDream in your Python script:

    from opendream import OpenDream
    
    model = OpenDream(model_path="path/to/downloaded/model")
    image = model.generate("Your text prompt here")
    image.save("output.png")
    
  4. Experiment with different prompts and settings to generate your desired images.

Competitor Comparisons

High-Resolution Image Synthesis with Latent Diffusion Models

Pros of stablediffusion

  • More comprehensive and feature-rich, offering a wider range of image generation capabilities
  • Larger community support and active development, with frequent updates and improvements
  • Better documentation and examples for easier implementation and usage

Cons of stablediffusion

  • Higher computational requirements and more complex setup process
  • Steeper learning curve for beginners due to its extensive features and options

Code Comparison

stablediffusion:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")

opendream:

from opendream import OpenDream

od = OpenDream()
image = od.generate("a photo of an astronaut riding a horse on mars")
image.save("astronaut_rides_horse.png")

The stablediffusion code offers more flexibility and control over the generation process, while opendream provides a simpler, more straightforward interface for quick image generation.
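
To make the flexibility point concrete, the diffusers pipeline exposes generation parameters directly at call time. Here is a brief sketch using the real StableDiffusionPipeline interface (the specific parameter values are illustrative):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU makes generation practical

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    negative_prompt="blurry, low quality",  # steer sampling away from artifacts
    num_inference_steps=30,                 # fewer steps trade quality for speed
    guidance_scale=7.5,                     # how strongly to follow the prompt
).images[0]
image.save("astronaut_rides_horse_tuned.png")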

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • More extensive features and options for image generation and manipulation
  • Larger community and more frequent updates
  • Better support for custom models and extensions

Cons of stable-diffusion-webui

  • Steeper learning curve due to numerous options and settings
  • Requires more computational resources for optimal performance

Code Comparison

opendream:

from io import BytesIO
from flask import request, send_file

@app.route("/dream", methods=["POST"])
def dream():
    prompt = request.json["prompt"]
    image = pipe(prompt).images[0]
    buffer = BytesIO()  # serialize the PIL image in memory before sending
    image.save(buffer, format="PNG")
    buffer.seek(0)
    return send_file(buffer, mimetype="image/png")

stable-diffusion-webui:

def txt2img(id_task: str, prompt: str, negative_prompt: str, steps: int, sampler_name: str, ...):
    p = StableDiffusionProcessingTxt2Img(
        sd_model=shared.sd_model,
        outpath_samples=opts.outdir_samples or opts.outdir_txt2img_samples,
        ...
    )
    processed = process_images(p)

The code snippets show that opendream has a simpler API-based approach, while stable-diffusion-webui offers more complex processing with numerous parameters and options.
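
As a usage sketch, a client could exercise an endpoint like opendream's /dream route with a plain HTTP request (the host and port here are assumptions):

import requests

# POST a prompt and write the returned PNG bytes to disk
resp = requests.post(
    "http://localhost:8000/dream",
    json={"prompt": "a watercolor painting of a fox"},
)
resp.raise_for_status()
with open("fox.png", "wb") as f:
    f.write(resp.content)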

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

Pros of Diffusers

  • Comprehensive library with support for multiple diffusion models and techniques
  • Extensive documentation and community support
  • Regular updates and maintenance from Hugging Face team

Cons of Diffusers

  • Steeper learning curve due to its extensive features
  • May be overkill for simple projects or specific use cases

Code Comparison

Diffusers:

from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipeline("A beautiful sunset over the ocean").images[0]
image.save("sunset.png")

OpenDream:

from opendream import OpenDream

model = OpenDream()
image = model.generate("A beautiful sunset over the ocean")
image.save("sunset.png")

OpenDream offers a simpler API for basic image generation, while Diffusers provides more flexibility and control over the generation process. Diffusers is better suited for advanced users and complex projects, whereas OpenDream may be more accessible for beginners or quick prototyping.


README

Opendream: A Web UI For the Rest of Us 💭 🎨

Opendream brings much-needed and familiar features, such as layering, non-destructive editing, portability, and easy-to-write extensions, to your Stable Diffusion workflows. Check out our demo video.

[hero image]

Getting started

  1. Prerequisites: Make sure you have Node installed. You can download it here.
  2. Clone this repository.
  3. Navigate to this project within your terminal and run sh ./run_opendream.sh. After ~30 seconds, both the frontend and backend of the Opendream system should be up and running.

Features

Diffusion models have emerged as powerful tools for image generation and manipulation. While they offer significant benefits, these models are often treated as black boxes due to their inherent complexity. The current diffusion image generation ecosystem is defined by tools that expose one-off image manipulation tasks - text2img, in-painting, pix2pix, among others - for controlling these models.

For example, popular interfaces like Automatic1111, Midjourney, and Stability.AI's DreamStudio only support destructive editing: each edit "consumes" the previous image. This means users cannot easily build off of previous images or run multiple experiments on the same image, limiting their options for creative exploration.

Layering and Non-destructive Editing

Non-destructive editing is a method of image manipulation that preserves the original image data, letting users make adjustments and modifications without overwriting previous work. This approach facilitates experimentation and gives more control over the editing process through layers and masks. In Opendream, deleting a layer also deletes every layer created after it, which guarantees that every layer on the canvas is a product of layers that still exist, and makes it possible to deterministically "replay" a workflow.

Like Photoshop, Opendream supports non-destructive editing out of the box. Learn more about the principles of non-destructive editing in Photoshop here.

[layers screenshot]

Save and Share Workflows

Users can also save their current workflows into a portable file format that can be opened up at a later time or shared with collaborators. In this context, a "state" is just a JSON file describing all of the current layers and how they were created.
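
For illustration, the parsed contents of such a state file might look something like this (the field names are hypothetical, not the backend's actual schema; dream and mask_and_inpaint are the real primitive operations described below):

# Hypothetical shape of a saved Opendream workflow state
state = {
    "layers": [
        {"id": 1, "op": "dream",
         "params": {"prompt": "a castle at dusk"}},
        {"id": 2, "op": "mask_and_inpaint",
         "params": {"mask": 1, "prompt": "a dragon circling the towers"}},
    ],
}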

[workflow screenshot]

Support Simple to Write, Easy to Install Extensions

As the open-source ecosystem flourishes around these models and tools, extensibility has also become a major concern. While Automatic1111 does offer extensions, they are often difficult to program, use, and install. It is far from being as full-featured as an application like Adobe Photoshop.

As new features for Stable Diffusion, like ControlNet, are released, users should be able to seamlessly integrate them into their artistic workflows with minimal overhead and time.

Opendream makes writing and using new diffusion features as simple as writing a Python function. Keep reading to learn how.

Extensions

From the get-go, Opendream supports two key primitive operations baked into the core system: dream and mask_and_inpaint. In this repository, extensions for instruct_pix2pix, controlnet_canny, controlnet_openpose, and sam (Segment Anything) are provided.

Any image manipulation logic can be easily written as an extension. With extensions, you can also decide how certain operations work. For example, you can override the dream operation to use OpenAI's DALL-E instead or call a serverless endpoint on a service like AWS or Replicate. Here's an example using Baseten.
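
As a sketch of such an override, assuming the pre-1.0 openai client, with import paths and an ImageLayer.from_url helper that are assumptions rather than confirmed API:

import openai
from opendream import opendream  # import path is an assumption
from opendream.layer import ImageLayer, Layer  # module path is an assumption

@opendream.define_op
def dream(prompt: str) -> Layer:
    # Call DALL-E instead of running a local diffusion model
    response = openai.Image.create(prompt=prompt, n=1, size="512x512")
    url = response["data"][0]["url"]
    return ImageLayer.from_url(url)  # hypothetical helper that fetches the image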

Loading an Existing Extension

There are two ways to load extensions.

  1. Install a pre-written one through the Web UI.
  2. (Manual) Download a valid extension file (or write one yourself!) and add it to the opendream/extensions folder. Instructions for writing your own extension are below.

Here is a sampling of currently supported extensions. You can use the links to install any given extension through the Web UI.

Extension                     Link
OpenAI's DALL-E               File
Serverless Stable Diffusion   File
Instruct Pix2Pix              File
ControlNet Canny              File
ControlNet Openpose           File
Segment Anything              File
PhotoshopGPT                  Gist

Note that extensions may have their own dependencies, which you would need to add to the requirements.txt file. For example, you would need to add openai to use the DALL-E extension.

Feel free to make a PR if you create a useful extension!

Writing Your Own Extension

Users can write their own extensions as follows:

  1. Create a new Python file in the opendream/extensions folder.
  2. Write a method with type hints and a @opendream.define_op decorator. This decorator registers this method with the Opendream backend.

The method has a few requirements:

  • Parameters must have type hints. These enable the backend to generate a schema for the input which is parsed into form components on the frontend. Valid types include: str, int, float, Layer, MaskLayer, or ImageLayer.
  • The only valid return types are a Layer or a list of Layer objects.
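
For example, a minimal extension might look like the following (the define_op decorator and the Layer types come from the rules above; the import paths, accessor, and constructor are assumptions):

from opendream import opendream  # import path is an assumption
from opendream.layer import ImageLayer, Layer  # module path is an assumption

@opendream.define_op
def grayscale(image_layer: ImageLayer) -> Layer:
    # The type hints let the backend build a form for this op in the UI
    img = image_layer.get_image().convert("L").convert("RGB")  # assumed accessor
    return ImageLayer(image=img)  # assumed constructor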

Contributions and Licensing

Opendream was built by Varun Shenoy, Eric Lou, Shashank Rammoorthy, and Rahul Shiv as a part of Stanford's CS 348K.

Feel free to provide any contributions you deem necessary or useful. This project is licensed under the MIT License.