Sanster / IOPaint

Image inpainting tool powered by SOTA AI models. Remove any unwanted objects, defects, or people from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.

Top Related Projects

High-Resolution Image Synthesis with Latent Diffusion Models

Stable Diffusion web UI

Let us control diffusion models!

Outpainting with Stable Diffusion on an infinite canvas

Quick Overview

IOPaint is an open-source image inpainting tool powered by Segment Anything, Stable Diffusion, and BLIP. It offers a user-friendly web interface for various image editing tasks, including object removal, image extension, and background generation. The project combines state-of-the-art AI models to provide advanced image manipulation capabilities.

Pros

  • Integrates multiple AI models for comprehensive image editing
  • User-friendly web interface for easy accessibility
  • Supports various inpainting tasks, including object removal and image extension
  • Open-source project with active development and community support

Cons

  • Requires significant computational resources for optimal performance
  • May have limitations in handling complex or highly detailed images
  • Dependency on multiple AI models can lead to increased setup complexity
  • Potential privacy concerns when processing images through AI models

Getting Started

To get started with IOPaint:

  1. Install the package:

    pip3 install iopaint
    
  2. Start the web interface:

    iopaint start --model=lama --device=cpu --port=8080
    
  3. Open a web browser and navigate to http://localhost:8080 to access the IOPaint interface. All required model weights are downloaded automatically at startup.

Note: Ensure you have Python 3.8+ installed; a CUDA-compatible GPU is recommended for optimal performance.
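
If you plan to run on a GPU, you can quickly confirm that PyTorch can see your CUDA device with a check like the following (a minimal sketch using PyTorch's standard API):

    import torch
    
    # True only if a CUDA-compatible GPU and a CUDA build of PyTorch are present
    print(torch.cuda.is_available())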

Competitor Comparisons

High-Resolution Image Synthesis with Latent Diffusion Models

Pros of stablediffusion

  • More comprehensive and versatile image generation capabilities
  • Larger community and broader range of applications
  • Advanced features like text-to-image and image-to-image generation

Cons of stablediffusion

  • Higher computational requirements and more complex setup
  • Steeper learning curve for beginners
  • Less focused on specific inpainting tasks compared to IOPaint

Code Comparison

IOPaint (Python):

from iopaint import InpaintModel

# Simplified, illustrative interface
model = InpaintModel()
result = model.inpaint(image, mask, prompt)

stablediffusion (Python):

from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]

Both repositories offer image manipulation capabilities, but stablediffusion provides a more comprehensive suite of tools for various image generation tasks. IOPaint focuses specifically on inpainting, making it potentially easier to use for that particular task. The code snippets demonstrate that IOPaint has a simpler API for inpainting, while stablediffusion offers more flexibility and options for different image generation tasks.
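
For reference, the diffusers call above can be sketched more completely as follows; the file names and prompt are placeholders, and a CUDA GPU is assumed:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline
    
    # Load the inpainting pipeline; fp16 halves GPU memory use
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")
    
    # White pixels in the mask mark the region to repaint
    init_image = Image.open("input.png").convert("RGB").resize((512, 512))
    mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))
    
    result = pipe(
        prompt="a cat sitting on a couch",
        image=init_image,
        mask_image=mask_image,
    ).images[0]
    result.save("output.png")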

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • More comprehensive set of features for image generation and manipulation
  • Larger community and more frequent updates
  • Supports a wider range of models and extensions

Cons of stable-diffusion-webui

  • Steeper learning curve due to its extensive feature set
  • Higher system requirements for optimal performance
  • More complex setup process, especially for beginners

Code Comparison

IOPaint:

def inpaint(self, image, mask, prompt):
    # Simplified inpainting process
    return self.model.inpaint(image, mask, prompt)

stable-diffusion-webui:

def inpaint(self, image, mask, prompt, steps, cfg_scale, denoising_strength):
    # More advanced inpainting with additional parameters
    return self.model.inpaint(image, mask, prompt, steps, cfg_scale, denoising_strength)

The code comparison shows that stable-diffusion-webui offers more granular control over the inpainting process, allowing users to adjust parameters like steps, cfg_scale, and denoising_strength. This reflects the overall trend of stable-diffusion-webui providing more advanced features and customization options compared to IOPaint's simpler approach.
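
stable-diffusion-webui also exposes these parameters over an HTTP API when started with the --api flag; a hedged sketch of an inpainting request (endpoint and field names follow the webui API; file names and prompt are placeholders):

    import base64
    import requests
    
    def b64(path: str) -> str:
        # The webui API exchanges images as base64 strings
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()
    
    # Assumes a local webui instance started with --api
    payload = {
        "init_images": [b64("input.png")],
        "mask": b64("mask.png"),
        "prompt": "a cat sitting on a couch",
        "steps": 30,
        "cfg_scale": 7.0,
        "denoising_strength": 0.75,
    }
    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    with open("output.png", "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))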

Let us control diffusion models!

Pros of ControlNet

  • More versatile and powerful, supporting a wide range of image manipulation tasks
  • Offers advanced control over image generation through various conditioning methods
  • Provides pre-trained models for different tasks, enhancing ease of use

Cons of ControlNet

  • Requires more computational resources and expertise to set up and use effectively
  • Less user-friendly interface, primarily designed for developers and researchers
  • May have a steeper learning curve for beginners in image manipulation

Code Comparison

ControlNet example:

from share import *
import config
from cldm.model import create_model  # ControlNet repo's model factory
from annotator.canny import CannyDetector

model = create_model('./models/control_sd15_canny.pth')
processor = CannyDetector()

input_image = load_image("input.jpg")  # placeholder for your own image-loading helper
detected_map = processor(input_image)
result = model(input_image, detected_map)  # simplified; the repo actually samples via DDIM

IOPaint example:

from iopaint import InpaintModel

# Simplified, illustrative interface
model = InpaintModel(device="cuda")
result = model.inpaint(
    image="input.jpg",
    mask="mask.png",
    prompt="a cat sitting on a couch"
)

While ControlNet offers more advanced control and versatility, IOPaint provides a simpler interface for inpainting tasks. ControlNet's code demonstrates its flexibility in applying different detection methods and models, while IOPaint focuses on a straightforward inpainting process with minimal setup.
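
For comparison, the same Canny-conditioned workflow is also available through the diffusers library; a minimal sketch, assuming the published sd-controlnet-canny weights and placeholder file names:

    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
    
    # Build the conditioning image from Canny edges
    image = np.array(Image.open("input.jpg").convert("RGB"))
    edges = cv2.Canny(image, 100, 200)
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))
    
    result = pipe(
        "a cat sitting on a couch", image=control, num_inference_steps=30
    ).images[0]
    result.save("output.png")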

Outpainting with Stable Diffusion on an infinite canvas

Pros of stablediffusion-infinity

  • Offers a wider range of AI models and techniques for image generation and manipulation
  • Provides more advanced features for fine-tuning and customizing the AI models
  • Supports batch processing and automation for larger-scale image generation tasks

Cons of stablediffusion-infinity

  • Has a steeper learning curve and may be more challenging for beginners to use
  • Requires more computational resources and may have longer processing times
  • Less focus on user-friendly interface and ease of use compared to IOPaint

Code Comparison

IOPaint:

def inpaint(self, image, mask, prompt, num_samples=1, num_steps=50):
    # Simplified inpainting function
    return self.model.inpaint(image, mask, prompt, num_samples, num_steps)

stablediffusion-infinity:

def generate_image(self, prompt, model_name, guidance_scale=7.5, num_inference_steps=50):
    # More complex image generation with additional parameters
    return self.pipeline(prompt, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]

The code comparison shows that stablediffusion-infinity offers more customization options and parameters for image generation, while IOPaint focuses on a simpler, more straightforward approach to inpainting tasks.
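
The core trick behind outpainting on a growing canvas can be reproduced with an ordinary inpainting model: pad the image onto a larger canvas and mask only the new border. A minimal sketch (pad size and file names are arbitrary):

    from PIL import Image
    
    def make_outpaint_inputs(img: Image.Image, pad: int):
        # Place the original on a larger canvas; the border is what gets generated
        w, h = img.size
        canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "black")
        canvas.paste(img, (pad, pad))
        # White mask pixels mark the area for the inpainting model to fill
        mask = Image.new("L", canvas.size, 255)
        mask.paste(Image.new("L", (w, h), 0), (pad, pad))
        return canvas, mask
    
    canvas, mask = make_outpaint_inputs(Image.open("input.png").convert("RGB"), pad=128)
    # canvas and mask can now be fed to any inpainting pipeline, such as the
    # StableDiffusionInpaintPipeline shown earlier, to extend the image outward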

README

IOPaint

A free and open-source inpainting & outpainting tool powered by SOTA AI models.

  • Erase (LaMa)
  • Replace Object (PowerPaint)
  • Draw Text (AnyText)
  • Out-painting (PowerPaint)

Features

Quick Start

Start webui

IOPaint provides a convenient webui for using the latest AI models to edit your images. You can install and start IOPaint easily by running the following commands:

# In order to use the GPU, install the CUDA version of PyTorch first.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118
# AMD GPU users, use the following command; it only works on Linux, as PyTorch does not yet support ROCm on Windows.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/rocm5.6

pip3 install iopaint
iopaint start --model=lama --device=cpu --port=8080

That's it. You can start using IOPaint by visiting http://localhost:8080 in your web browser.

All models will be downloaded automatically at startup. If you want to change the download directory, add the --model-dir option. More documentation can be found here.
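
For example, to keep the downloaded weights in a custom directory (the path is a placeholder):

    iopaint start --model=lama --device=cpu --port=8080 --model-dir=/path/to/models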

You can see the other supported models here, and how to use a local SD ckpt/safetensors file here.

Plugins

You can specify which plugins to use when starting the service, and you can view the commands to enable plugins by using iopaint start --help.

More demonstrations of the plugins can be seen here.

iopaint start --enable-interactive-seg --interactive-seg-device=cuda

Batch processing

You can also use IOPaint in the command line to batch process images:

iopaint run --model=lama --device=cpu \
--image=/path/to/image_folder \
--mask=/path/to/mask_folder \
--output=output_dir

--image is the folder containing input images, and --mask is the folder containing the corresponding mask images. When --mask points to a single mask file, all images are processed with that one mask, as shown below.
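
For example, to apply one mask file to every image in a folder:

    iopaint run --model=lama --device=cpu \
    --image=/path/to/image_folder \
    --mask=/path/to/mask.png \
    --output=output_dir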

You can see more information about the available models and plugins supported by IOPaint below.

Development

Install Node.js, then install the frontend dependencies and build the web app:

git clone https://github.com/Sanster/IOPaint.git
cd IOPaint/web_app
npm install
npm run build
cp -r dist/ ../iopaint/web_app

Create a .env.local file in web_app and fill in the backend IP and port.

VITE_BACKEND=http://127.0.0.1:8080

Start the front-end development environment:

npm run dev

Install the back-end requirements and start the backend service:

pip install -r requirements.txt
python3 main.py start --model lama --port 8080

Then you can visit http://localhost:5173/ for development. The frontend code updates automatically after modification, but the backend service must be restarted after the Python code changes.