invoke-ai/InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.

Top Related Projects

  • Stable Diffusion web UI
  • stablediffusion: High-Resolution Image Synthesis with Latent Diffusion Models
  • 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

Quick Overview

InvokeAI is an open-source project that provides a leading creative engine for Stable Diffusion models. It offers a powerful, user-friendly interface for generating, editing, and manipulating images using various AI models. The project aims to make advanced AI image generation accessible to both beginners and experienced users.

Pros

  • Intuitive and feature-rich web interface for easy image generation and manipulation
  • Supports multiple Stable Diffusion models and custom model integration
  • Offers advanced features like inpainting, outpainting, and image-to-image generation
  • Active community and regular updates with new features and improvements

Cons

  • Requires significant computational resources, especially GPU, for optimal performance
  • Installation process can be complex for users unfamiliar with Python environments
  • Learning curve for advanced features and optimal prompt engineering
  • Dependency on external models and occasional compatibility issues with new model versions

Code Examples

# Initialize InvokeAI pipeline
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.stable_diffusion import SDXLTextToImageInvocation

pipeline = SDXLTextToImageInvocation(
    prompt="A serene landscape with mountains and a lake",
    width=1024,
    height=768,
    steps=30,
    cfg_scale=7.5
)

# Generate image
result = pipeline.invoke()
output_image = result.image

# Perform inpainting on an existing image
from invokeai.app.invocations.stable_diffusion import SDXLInpaintInvocation

inpaint_pipeline = SDXLInpaintInvocation(
    prompt="Add a boat on the lake",
    image=ImageField(image_path="path/to/original_image.png"),
    mask=ImageField(image_path="path/to/mask.png"),
    strength=0.8
)

inpainted_result = inpaint_pipeline.invoke()
inpainted_image = inpainted_result.image

# Use image-to-image generation
from invokeai.app.invocations.stable_diffusion import SDXLImageToImageInvocation

img2img_pipeline = SDXLImageToImageInvocation(
    prompt="Transform the landscape into a winter scene",
    image=ImageField(image_path="path/to/input_image.png"),
    strength=0.75
)

img2img_result = img2img_pipeline.invoke()
transformed_image = img2img_result.image

Getting Started

  1. Install InvokeAI:

    pip install invokeai
    
  2. Download and set up models:

    invokeai-configure --quick-setup
    
  3. Launch the web interface:

    invokeai-web
    
  4. Access the interface in your web browser at http://localhost:9090
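Because the Cons above note that Python environments can trip up new users, the sketch below runs the same three commands inside a dedicated virtual environment; the environment name invokeai-env is only an example, while the invokeai commands are the ones from steps 1-3.

    # create and activate an isolated environment (the name "invokeai-env" is illustrative)
    python -m venv invokeai-env
    source invokeai-env/bin/activate    # on Windows: invokeai-env\Scripts\activate

    # install, set up models, and launch the web UI (steps 1-3 above)
    pip install invokeai
    invokeai-configure --quick-setup
    invokeai-web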

Competitor Comparisons

InvokeAI

Pros of InvokeAI

  • Comprehensive AI image generation toolkit
  • Active development and regular updates
  • Extensive documentation and community support

Cons of InvokeAI

  • Steeper learning curve for beginners
  • Resource-intensive, requiring powerful hardware

Code Comparison

InvokeAI:

from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.invocations.baseinvocation import BaseInvocation, InputField, InvocationContext, invocation

@invocation("get_image_metadata", title="Get Image Metadata", tags=["image", "metadata"])
class GetImageMetadataInvocation(BaseInvocation):
    image_name: str = InputField(description="Name of the image")

As both repositories are the same project (InvokeAI), there is no direct code comparison to be made. The code snippet above is an example of how invocations are defined in the InvokeAI project.

InvokeAI is a powerful and flexible AI image generation toolkit that offers a wide range of features and capabilities. It provides a comprehensive set of tools for creating, manipulating, and enhancing AI-generated images. The project is actively maintained and regularly updated, ensuring users have access to the latest advancements in AI image generation technology.

While InvokeAI offers extensive functionality, it may have a steeper learning curve for beginners compared to simpler alternatives. Additionally, due to its advanced features, it can be resource-intensive and may require more powerful hardware to run efficiently.

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • More extensive feature set and customization options
  • Larger community with frequent updates and extensions
  • Better performance for generating images on consumer-grade hardware

Cons of stable-diffusion-webui

  • Steeper learning curve due to complex interface and numerous options
  • Less focus on code quality and maintainability
  • May require more manual setup and configuration

Code Comparison

InvokeAI:

def generate_image(prompt, seed=None, width=512, height=512):
    generator = InvokeAIGenerator()
    image = generator.generate(prompt, seed, width, height)
    return image

stable-diffusion-webui:

def generate_image(prompt, seed=None, width=512, height=512):
    p = StableDiffusionProcessing(
        sd_model=shared.sd_model,
        prompt=prompt,
        seed=seed,
        width=width,
        height=height
    )
    processed = processing.process_images(p)
    return processed.images[0]

Both repositories offer powerful tools for generating images using Stable Diffusion models. InvokeAI focuses on a more streamlined, user-friendly experience with cleaner code, while stable-diffusion-webui provides a feature-rich environment with extensive customization options at the cost of increased complexity.

High-Resolution Image Synthesis with Latent Diffusion Models

Pros of stablediffusion

  • More focused on core stable diffusion implementation
  • Potentially better performance due to specialized codebase
  • Closer integration with Stability AI's research and updates

Cons of stablediffusion

  • Less user-friendly interface compared to InvokeAI
  • Fewer built-in features and tools for image generation
  • May require more technical knowledge to use effectively

Code Comparison

InvokeAI:

from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, InputField

class MyCustomInvocation(BaseInvocation):
    image: ImageField = InputField(description="Input image")
    # ... more custom logic

stablediffusion:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]  # run the pipeline and take the first generated image

The code comparison shows that InvokeAI provides a more abstracted and user-friendly API for creating custom invocations, while stablediffusion offers a lower-level implementation that may provide more flexibility for advanced users.

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.

Pros of Diffusers

  • Broader scope, supporting various diffusion models beyond just image generation
  • More extensive documentation and tutorials
  • Larger community and more frequent updates

Cons of Diffusers

  • Steeper learning curve for beginners
  • Less focus on user-friendly interfaces and tools

Code Comparison

InvokeAI example:

from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.stable_diffusion import SDXLTextToImageInvocation

result = SDXLTextToImageInvocation(
    prompt="A beautiful sunset over the ocean",
    width=1024,
    height=1024
).invoke()

Diffusers example:

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # half-precision weights need a CUDA device
image = pipe("A beautiful sunset over the ocean").images[0]

Both repositories offer powerful tools for working with diffusion models, but they cater to slightly different use cases. InvokeAI provides a more user-friendly experience with its focus on image generation, while Diffusers offers a broader range of models and applications at the cost of a steeper learning curve.

README

Invoke - Professional Creative AI Tools for Visual Media

To learn more about Invoke, or implement our Business solutions, visit invoke.com

Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.

Invoke is available in two editions:

Community Edition
For users looking for a locally installed, self-hosted and self-managed service
  • Free to use under a commercially-friendly license
  • Download and install on compatible hardware
  • Includes all core studio features: generate, refine, iterate on images, and build workflows
Quick Start -> Installation and Updates

Professional Edition
For users or teams looking for a cloud-hosted, fully managed service
  • Monthly subscription fee with three different plan levels
  • Offers additional benefits, including multi-user support, improved model training, and more
  • Hosted in the cloud for easy, secure model access and scalability
More Information -> www.invoke.com/pricing

Highlighted Features - Canvas and Workflows

Documentation

Quick Links
Installation and Updates - Documentation and Tutorials - Bug Reports - Contributing

Quick Start

  1. Download and unzip the installer from the bottom of the latest release.

  2. Run the installer script.

    • Windows: Double-click on the install.bat script.
    • macOS: Open a Terminal window, drag the file install.sh from Finder into the Terminal, and press enter.
    • Linux: Run install.sh.
  3. When prompted, enter a location for the install and select your GPU type.

  4. Once the install finishes, find the directory you selected during install. The default location is C:\Users\Username\invokeai for Windows or ~/invokeai for Linux/macOS.

  5. Run the launcher script (invoke.bat for Windows, invoke.sh for macOS and Linux) the same way you ran the installer script in step 2.

  6. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.

  7. Open the model manager tab to install a starter model and then you'll be ready to generate.

More detail, including hardware requirements and manual install instructions, is available in the installation documentation.
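For reference, on Linux or macOS the installer flow above boils down to a couple of commands; this sketch assumes the default ~/invokeai install location from step 4.

    # step 2: run the installer from the directory where it was unzipped
    ./install.sh

    # steps 5-6: run the launcher from the install directory, select option 1,
    # then open http://localhost:9090 in a browser
    ~/invokeai/invoke.sh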

Docker Container

We publish official container images in the GitHub Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the link above for the relevant tags.

[!IMPORTANT] Ensure that Docker is set up to use the GPU. Refer to NVIDIA or AMD documentation.

Generate!

Run the container, modifying the command as necessary:

docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai

Then open http://localhost:9090 and install some models using the Model Manager tab to begin generating.

For ROCm, add --device /dev/kfd --device /dev/dri to the docker run command.

Persist your data

You will likely want to persist your workspace outside of the container. Use the --volume /home/myuser/invokeai:/invokeai flag to mount some local directory (using its absolute path) to the /invokeai path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
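Putting the flags together, a typical NVIDIA run with a persistent runtime directory might look like the sketch below; adjust the host path to your own directory, and for ROCm replace the NVIDIA flags with the --device flags noted above.

docker run --runtime=nvidia --gpus=all \
  --publish 9090:9090 \
  --volume /home/myuser/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai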

DIY

Build your own image and customize the environment to match your needs using our docker-compose stack. See README.md in the docker directory.

Troubleshooting, FAQ and Support

Please review our FAQ for solutions to common installation problems and other issues.

For more help, please join our Discord.

Features

Full details on features can be found in our documentation.

Web Server & UI

Invoke runs a locally hosted web server & React UI with an industry-leading user experience.

Unified Canvas

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

Workflows & Nodes

Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.

Board & Gallery Management

Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged and dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.

Other features

  • Support for both ckpt and diffusers models
  • SD1.5, SD2.0, and SDXL support
  • Upscaling Tools
  • Embedding Manager & Support
  • Model Manager & Support
  • Workflow creation & management
  • Node-Based Architecture

Contributing

Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.

Get started by reading our contribution documentation, then join the #dev-chat channel on our Discord or the GitHub discussion board.

We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.

Thanks

Invoke is a combined effort of passionate and talented people from across the world. We thank them for their time, hard work and effort.

Original portions of the software are Copyright © 2024 by respective contributors.