image-background-remove-tool
✂️ Automated high-quality background removal framework for an image using neural networks. ✂️
Top Related Projects
Rembg is a tool to remove images background
Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.
Quick Overview
The image-background-remove-tool is an open-source project that provides a simple and efficient way to remove backgrounds from images. It utilizes various AI segmentation models to achieve high-quality results and offers both command-line and graphical user interfaces for ease of use.
Pros
- Supports multiple AI segmentation models for flexibility and improved results
- Offers both CLI and GUI interfaces, catering to different user preferences
- Provides batch processing capabilities for handling multiple images efficiently
- Actively maintained with regular updates and improvements
Cons
- Requires installation of dependencies, which may be challenging for some users
- Performance can vary depending on the chosen model and image complexity
- Limited customization options for advanced users
- May require significant computational resources for processing large images or batches
Code Examples
- Basic usage with default settings:
from carvekit.api.high import HiInterface
interface = HiInterface(object_type="object")
images = interface(["path/to/image1.jpg", "path/to/image2.png"])  # returns a list of PIL images
for i, image in enumerate(images):
    image.save(f"output_directory/result_{i}.png")
- Customizing the model and trimap parameters (object_type="hairs-like" selects the U^2-Net hair mode, "object" selects Tracer-B7; see the README below):
from carvekit.api.high import HiInterface
interface = HiInterface(
    object_type="hairs-like",   # U^2-Net, better for hair
    seg_mask_size=320,          # 320 for U^2-Net, 640 for Tracer-B7
    trimap_dilation=30,
    trimap_erosion_iters=5
)
images = interface(["path/to/image.jpg"])
images[0].save("output_directory/image.png")
- Batch processing with progress tracking:
from carvekit.api.high import HiInterface
from tqdm import tqdm
interface = HiInterface(object_type="object")
image_paths = ["path/to/image1.jpg", "path/to/image2.png", "path/to/image3.jpg"]
for i, image_path in enumerate(tqdm(image_paths, desc="Processing images")):
    result = interface([image_path])[0]
    result.save(f"output_directory/result_{i}.png")
Getting Started
- Install the package:
pip install carvekit
- Basic usage:
from carvekit.api.high import HiInterface
interface = HiInterface(object_type="object")
image = interface(["path/to/image.jpg"])[0]  # returns a list of PIL images
image.save("output_directory/image.png")
This will remove the background from the specified image and save the result in the output directory.
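If you prefer not to write Python, the same operation is available from the command line (the flags are documented in the CLI section further down):
python3 -m carvekit -i path/to/image.jpg -o output_directory/image.png --device cpu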
Competitor Comparisons
Rembg is a tool to remove images background
Pros of rembg
- Simpler installation process with fewer dependencies
- Supports both CLI and API usage, offering more flexibility
- Faster processing times for most images
Cons of rembg
- Limited model options compared to image-background-remove-tool
- Less customization for advanced users
- May struggle with complex backgrounds in some cases
Code Comparison
rembg:
from rembg import remove
from PIL import Image
input_path = 'input.png'
output_path = 'output.png'
input = Image.open(input_path)
output = remove(input)
output.save(output_path)
image-background-remove-tool:
from carvekit.api.high import HiInterface
input_path = "input.png"
output_path = "output.png"
interface = HiInterface(object_type="object")
result = interface([input_path])[0]
result.save(output_path)
Both tools offer straightforward usage for background removal; rembg's implementation is slightly more concise, while image-background-remove-tool provides more options for customization and model selection, which may be beneficial for specific use cases or advanced users.
Background Remover lets you Remove Background from images and video using AI with a simple command line interface that is free and open source.
Pros of backgroundremover
- Supports video background removal in addition to images
- Includes a command-line interface for easy integration
- Offers pre-trained models for quick setup and usage
Cons of backgroundremover
- Less flexibility in model selection compared to image-background-remove-tool
- May have lower accuracy for certain image types or complex backgrounds
- Limited customization options for fine-tuning the removal process
Code Comparison
image-background-remove-tool:
from carvekit.api.high import HiInterface
interface = HiInterface(object_type="object",
batch_size_seg=5,
batch_size_matting=1,
device='cuda')
img = interface(["path/to/image.jpg"])[0]
img.save("output.png")
backgroundremover (command-line interface):
backgroundremover -i "path/to/image.jpg" -o "output.png"
The code comparison shows that backgroundremover offers a simple single-command interface with few configuration options, while image-background-remove-tool provides more control over the removal process, model parameters, and device selection.
Both projects aim to simplify background removal tasks, but they cater to different use cases. image-background-remove-tool focuses on providing a more customizable solution for image processing, while backgroundremover offers a straightforward approach with added support for video processing.
README
✂️ CarveKit ✂️
The higher resolution images from the picture above can be seen in the docs/imgs/compare/ and docs/imgs/input folders.
Description:
Automated high-quality background removal framework for an image using neural networks.
Features:
- High Quality
- Batch Processing
- NVIDIA CUDA and CPU processing
- FP16 inference: Fast inference with low memory usage
- Easy inference
- 100% remove.bg compatible FastAPI HTTP API
- Removes background from hairs
- Easy integration with your code
Try yourself on Google Colab
How does it work?
It can be briefly described as follows:
- The user selects a picture or a folder with pictures for processing
- The photo is preprocessed to ensure the best quality of the output image
- Using machine learning technology, the background of the image is removed
- Image post-processing to improve the quality of the processed image
Implemented Neural Networks:
Networks | Target | Accuracy |
---|---|---|
Tracer-B7 (default) | General (objects, animals, etc) | 90% (mean F1-Score, DUTS-TE) |
U^2-net | Hairs (hairs, people, animals, objects) | 80.4% (mean F1-Score, DUTS-TE) |
BASNet | General (people, objects) | 80.3% (mean F1-Score, DUTS-TE) |
DeepLabV3 | People, Animals, Cars, etc | 67.4% (mean IoU, COCO val2017) |
Recommended parameters for different models
Networks | Segmentation mask size | Trimap parameters (dilation, erosion) |
---|---|---|
tracer_b7 | 640 | (30, 5) |
u2net | 320 | (30, 5) |
basnet | 320 | (30, 5) |
deeplabv3 | 1024 | (40, 20) |
Notes:
- The final quality may depend on the resolution of your image and the type of scene or object.
- Use U2-Net for hair and Tracer-B7 for general images, and use the correct parameters. This is very important for the final quality! The example images were produced using U2-Net with FBA post-processing.
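As a concrete illustration, the u2net row of the table maps onto the HiInterface arguments documented in the "Interact via code" section below (a minimal sketch; the input path is a placeholder):
from carvekit.api.high import HiInterface

# u2net row: segmentation mask size 320, trimap dilation 30, trimap erosion 5
interface = HiInterface(object_type="hairs-like",  # "hairs-like" is the U^2-Net / hair mode, "object" is Tracer-B7
                        seg_mask_size=320,
                        trimap_dilation=30,
                        trimap_erosion_iters=5)
interface(["path/to/portrait.jpg"])[0].save("portrait_no_bg.png")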
Image pre-processing and post-processing methods:
Preprocessing methods:
- none - No preprocessing methods are used. More methods will be added in the future.
Post-processing methods:
- none - No post-processing methods are used.
- fba (default) - This algorithm improves the borders of the image when removing the background from images with hair, etc., using the FBA Matting neural network. This method gives the best result in combination with u2net without any preprocessing methods.
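The pre- and post-processing methods can also be selected from the command line (the flags are documented in the CLI section further down), for example:
python3 -m carvekit -i ./2.jpg -o ./2.png --pre none --post fba --net u2net --seg_mask_size 320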
Setup for CPU processing:
pip install carvekit --extra-index-url https://download.pytorch.org/whl/cpu
The project has been tested on Python versions ranging from 3.9 to 3.11.7.
Setup for GPU processing:
- Make sure you have an NVIDIA GPU with 8 GB of VRAM.
- Install CUDA Toolkit 12.1 and a video driver for your GPU.
pip install carvekit --extra-index-url https://download.pytorch.org/whl/cu121
The project has been tested on Python versions ranging from 3.9 to 3.11.7.
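Before running CarveKit you can quickly verify that PyTorch sees your GPU (a minimal sanity check):
import torch
print(torch.__version__)
print(torch.cuda.is_available())  # should print True for a working CUDA setup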
𧰠Interact via code:
If you don't need deep configuration or don't want to deal with it
import torch
from carvekit.api.high import HiInterface
# Check doc strings for more information
interface = HiInterface(object_type="hairs-like", # Can be "object" or "hairs-like".
batch_size_seg=5,
batch_size_matting=1,
device='cuda' if torch.cuda.is_available() else 'cpu',
seg_mask_size=640, # Use 640 for Tracer B7 and 320 for U2Net
matting_mask_size=2048,
trimap_prob_threshold=231,
trimap_dilation=30,
trimap_erosion_iters=5,
fp16=False)
images_without_background = interface(['./tests/data/cat.jpg'])
cat_wo_bg = images_without_background[0]
cat_wo_bg.save('2.png')
If you want to control everything:
import PIL.Image
from carvekit.api.interface import Interface
from carvekit.ml.wrap.fba_matting import FBAMatting
from carvekit.ml.wrap.tracer_b7 import TracerUniversalB7
from carvekit.pipelines.postprocessing import MattingMethod
from carvekit.pipelines.preprocessing import PreprocessingStub
from carvekit.trimap.generator import TrimapGenerator
# Check doc strings for more information
seg_net = TracerUniversalB7(device='cpu',
batch_size=1)
fba = FBAMatting(device='cpu',
input_tensor_size=2048,
batch_size=1)
trimap = TrimapGenerator()
preprocessing = PreprocessingStub()
postprocessing = MattingMethod(matting_module=fba,
trimap_generator=trimap,
device='cpu')
interface = Interface(pre_pipe=preprocessing,
post_pipe=postprocessing,
seg_pipe=seg_net)
image = PIL.Image.open('tests/data/cat.jpg')
cat_wo_bg = interface([image])[0]
cat_wo_bg.save('2.png')
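The segmentation network can be swapped in this low-level pipeline. The sketch below substitutes a U^2-Net wrapper for Tracer-B7 and reuses the preprocessing/postprocessing objects from the example above; the carvekit.ml.wrap.u2net module path and its constructor arguments are assumed to mirror the TracerUniversalB7 wrapper, so check the doc strings before relying on this:
from carvekit.ml.wrap.u2net import U2NET  # assumed wrapper path, analogous to tracer_b7

# Assumption: the U^2-Net wrapper accepts the same device/batch_size arguments
seg_net = U2NET(device='cpu', batch_size=1)
interface = Interface(pre_pipe=preprocessing,
                      post_pipe=postprocessing,
                      seg_pipe=seg_net)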
𧰠Running the CLI interface:
python3 -m carvekit -i <input_path> -o <output_path> --device <device>
Explanation of args:
Usage: carvekit [OPTIONS]
Performs background removal on specified photos using console interface.
Options:
-i ./2.jpg Path to input file or dir [required]
-o ./2.png Path to output file or dir
--pre none Preprocessing method
--post fba Postprocessing method.
--net tracer_b7 Segmentation Network. Check README for more info.
--recursive Enables recursive search for images in a folder
--batch_size 10 Batch Size for list of images to be loaded to
RAM
--batch_size_seg 5 Batch size for list of images to be processed
by segmentation network
--batch_size_mat 1 Batch size for list of images to be processed
by matting network
--seg_mask_size 640 The size of the input image for the
segmentation neural network. Use 640 for Tracer B7 and 320 for U2Net
--matting_mask_size 2048 The size of the input image for the matting
neural network.
--trimap_dilation 30 The size of the offset radius from the
object mask in pixels when forming an
unknown area
--trimap_erosion 5 The number of iterations of erosion that the
object's mask will be subjected to before
forming an unknown area
--trimap_prob_threshold 231
Probability threshold at which the
prob_filter and prob_as_unknown_area
operations will be applied
--device cpu Processing Device.
--fp16 Enables mixed precision processing. Use only with CUDA. CPU support is experimental!
--help Show this message and exit.
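A couple of typical invocations based on the options above (paths are placeholders):
# Process a folder of photos recursively on the GPU with mixed precision
python3 -m carvekit -i ./photos -o ./results --recursive --device cuda --fp16
# Use DeepLabV3 with its recommended parameters from the table above
python3 -m carvekit -i ./photo.jpg -o ./photo.png --net deeplabv3 --seg_mask_size 1024 --trimap_dilation 40 --trimap_erosion 20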
Running the Framework / FastAPI HTTP API server via Docker:
Using the API via Docker is a fast and simple way to get a working API.
Our Docker images are available on Docker Hub.
Version tags match the project releases, with the suffixes -cpu and -cuda for the CPU and CUDA versions respectively.
Important Notes:
- The Docker image has a default front-end at the / url and a FastAPI backend with docs at the /docs url.
- Authentication is enabled by default. Token keys are reset on every container restart if ENV variables are not set. See docker-compose.<device>.yml for more information.
- You can see your access keys in the docker container logs.
- There are examples of interaction with the API. See docs/code_examples/python for more details.
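A minimal Python client sketch for the remove.bg-compatible endpoint is shown below. The port, the /api/removebg path, the X-Api-Key header, and the image_file field are assumptions modeled on the remove.bg API that the server mirrors; the canonical client code lives in docs/code_examples/python.
import requests

# Assumed endpoint and credentials; check the container logs for your real access key
API_URL = "http://localhost:5000/api/removebg"
API_KEY = "your-access-key-from-container-logs"

with open("input.jpg", "rb") as f:
    response = requests.post(API_URL,
                             files={"image_file": f},
                             data={"size": "auto"},
                             headers={"X-Api-Key": API_KEY})
response.raise_for_status()
with open("no_bg.png", "wb") as out:
    out.write(response.content)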
Creating and running a container:
- Install docker-compose
- Run docker-compose -f docker-compose.cpu.yml up -d  # For CPU processing
- Run docker-compose -f docker-compose.cuda.yml up -d  # For GPU processing
You can also mount folders from your host machine into the Docker container and use the CLI interface inside the container to process files in that folder.
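For example, a one-off CLI run inside the container (a sketch: the carvekit_api service name comes from the compose files used here, and the mounted path is arbitrary):
docker-compose -f docker-compose.cpu.yml run -v "$(pwd)/photos:/photos" carvekit_api python3 -m carvekit -i /photos -o /photos --recursive --device cpu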
Building a docker image on Windows is not officially supported. You can try using WSL2 or "Linux Containers Mode" but I haven't tested this.
Testing
Testing with local environment
pip install -r requirements_test.txt
pytest
Testing with Docker
- Run docker-compose -f docker-compose.cpu.yml run carvekit_api pytest  # For testing on CPU
- Run docker-compose -f docker-compose.cuda.yml run carvekit_api pytest  # For testing on GPU
Credits: More info
Support
You can thank me for developing this project and buy me a small cup of coffee ☕
Blockchain | Cryptocurrency | Network | Wallet |
---|---|---|---|
Ethereum | ETH / USDT / USDC / BNB / Dogecoin | Mainnet | 0x7Ab1B8015020242D2a9bC48F09b2F34b994bc2F8 |
Ethereum | ETH / USDT / USDC / BNB / Dogecoin | BSC (Binance Smart Chain) | 0x7Ab1B8015020242D2a9bC48F09b2F34b994bc2F8 |
Bitcoin | BTC | - | bc1qmf4qedujhhvcsg8kxpg5zzc2s3jvqssmu7mmhq |
ZCash | ZEC | - | t1d7b9WxdboGFrcVVHG2ZuwWBgWEKhNUbtm |
Tron | TRX | - | TH12CADSqSTcNZPvG77GVmYKAe4nrrJB5X |
Monero | XMR | Mainnet | 48w2pDYgPtPenwqgnNneEUC9Qt1EE6eD5MucLvU3FGpY3SABudDa4ce5bT1t32oBwchysRCUimCkZVsD1HQRBbxVLF9GTh3 |
TON | TON | - | EQCznqTdfOKI3L06QX-3Q802tBL0ecSWIKfkSjU-qsoy0CWE |
Feedback
I will be glad to receive feedback on the project and suggestions for integration.
For all questions, write to: farvard34@gmail.com