
SysCV / sam-hq

Segment Anything in High Quality [NeurIPS 2023]


Top Related Projects

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything

This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.

Quick Overview

SAM-HQ is an improved version of the Segment Anything Model (SAM), offering higher quality segmentation results. It enhances the original SAM by incorporating a high-quality (HQ) decoder, resulting in more accurate and detailed segmentation outputs, particularly for complex objects and fine structures.

Pros

  • Improved segmentation quality, especially for intricate objects and fine details
  • Compatible with existing SAM prompts and workflows
  • Maintains efficiency while delivering higher quality results
  • Applicable to various computer vision tasks

Cons

  • May require more computational resources compared to the original SAM
  • Limited documentation and examples for advanced use cases
  • Potential learning curve for users unfamiliar with the original SAM
  • Might be overkill for simple segmentation tasks

Code Examples

  1. Loading the SAM-HQ model (the import assumes the repository's segment_anything package, installed as shown in Getting Started below):
# sam-hq ships its model code under the segment_anything package name
from segment_anything import sam_model_registry, SamPredictor

model_type = "vit_h"
checkpoint = "sam_hq_vit_h.pth"
device = "cuda"  # or "cpu" if no GPU is available

sam = sam_model_registry[model_type](checkpoint=checkpoint)
sam.to(device=device)

predictor = SamPredictor(sam)
  2. Generating masks from an image:
import numpy as np
from PIL import Image

image = np.array(Image.open("example_image.jpg"))
predictor.set_image(image)

input_point = np.array([[500, 375]])
input_label = np.array([1])

masks, _, _ = predictor.predict(
    point_coords=input_point,
    point_labels=input_label,
    multimask_output=True,
)
  3. Visualizing the segmentation results:
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
plt.imshow(image)
for mask in masks:
    # mask out background pixels so only the predicted region is overlaid
    plt.imshow(np.ma.masked_where(~mask, mask), alpha=0.5, cmap="jet")
plt.axis('off')
plt.show()

Getting Started

To get started with SAM-HQ:

  1. Clone the repository:

    git clone https://github.com/SysCV/sam-hq.git
    cd sam-hq
    
  2. Install the package in editable mode (this also installs its dependencies):

    pip install -e .
    
  3. Download the pre-trained model:

    wget https://huggingface.co/lkeab/hq-sam/resolve/main/sam_hq_vit_h.pth
    
  4. Use the code examples provided above to load the model and perform segmentation on your images.
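As an additional quick start, whole-image automatic mask generation is also available. Below is a hedged sketch; the checkpoint and image paths are placeholders, and the import assumes the repository's segment_anything package:

# Automatic mask generation over the whole image (no prompts needed); paths are placeholders.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
sam.to(device="cuda")  # or "cpu"

mask_generator = SamAutomaticMaskGenerator(sam)
image = np.array(Image.open("example_image.jpg").convert("RGB"))
masks = mask_generator.generate(image)  # list of dicts with "segmentation", "area", "bbox", ...
print(len(masks), "masks generated")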

Competitor Comparisons

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Pros of segment-anything

  • Larger community and more extensive documentation
  • Broader range of applications and use cases
  • More frequent updates and maintenance

Cons of segment-anything

  • Higher computational requirements
  • Potentially slower inference time for certain tasks
  • May be overkill for simpler segmentation tasks

Code Comparison

segment-anything:

from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["default"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, _, _ = predictor.predict(point_coords=input_point, point_labels=input_label)

sam-hq:

from segment_anything_hq import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, _, _ = predictor.predict(point_coords=input_point, point_labels=input_label)

The usage is nearly identical between the two repositories; the main differences are the package name, the registry key, and the checkpoint file. sam-hq focuses on producing higher-quality masks, while segment-anything is the more general-purpose original.

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything

Pros of Grounded-Segment-Anything

  • Integrates grounding DINO for object detection, enhancing segmentation accuracy
  • Supports text-to-mask generation, allowing for more intuitive user interactions
  • Offers a wider range of applications, including visual question answering

Cons of Grounded-Segment-Anything

  • May have higher computational requirements due to additional components
  • Potentially more complex to set up and use compared to SAM-HQ
  • Could have slower inference times due to the integration of multiple models

Code Comparison

SAM-HQ (automatic mask generation with the repository's actual API):

from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
masks = SamAutomaticMaskGenerator(sam).generate(image)

Grounded-Segment-Anything (illustrative pseudocode; the real pipeline chains a Grounding DINO detector with a SAM predictor):

# illustrative -- text prompt -> Grounding DINO boxes -> SAM masks
boxes = grounding_dino.predict(image, text_prompt="a cat")
masks = sam_predictor.predict(box=boxes)

Both repositories build upon the original Segment Anything Model (SAM), but take different approaches to enhance its capabilities. SAM-HQ focuses on improving the quality of segmentation masks, while Grounded-Segment-Anything adds object detection and text-based interactions. The choice between the two depends on the specific use case and desired features.

This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!

Pros of MobileSAM

  • Significantly smaller model size, making it more suitable for mobile and edge devices
  • Faster inference speed, allowing for real-time applications
  • Maintains comparable performance to the original SAM model

Cons of MobileSAM

  • Slightly lower accuracy compared to SAM-HQ in some scenarios
  • May not handle complex or highly detailed images as well as SAM-HQ
  • Limited documentation and community support compared to the more established SAM-HQ

Code Comparison

MobileSAM:

from mobile_sam import SamPredictor, SamAutomaticMaskGenerator, sam_model_registry
sam = sam_model_registry["vit_t"](checkpoint="mobile_sam.pt")
mask_generator = SamAutomaticMaskGenerator(sam)

SAM-HQ:

from segment_anything import SamPredictor, SamAutomaticMaskGenerator, sam_model_registry
sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

The main differences in the code are the import statements and the model type used. MobileSAM uses a smaller "vit_t" model, while SAM-HQ uses the larger "vit_h" model. This reflects the core difference between the two projects: MobileSAM's focus on efficiency and SAM-HQ's emphasis on high-quality results.

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Pros of EfficientSAM

  • Faster inference speed and lower computational requirements
  • More lightweight model architecture, suitable for resource-constrained environments
  • Easier to deploy and integrate into existing systems

Cons of EfficientSAM

  • Potentially lower accuracy compared to SAM-HQ, especially for complex segmentation tasks
  • Less extensive documentation and community support
  • Fewer pre-trained models and variants available

Code Comparison

SAM-HQ (using the repository's segment_anything package):

from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_hq_vit_h.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, _, _ = predictor.predict(point_coords=point_coords, point_labels=point_labels)

EfficientSAM (illustrative sketch; check the EfficientSAM repository for its actual builders and prediction interface):

# illustrative -- EfficientSAM ships its own model builders and expects batched point tensors
model = build_efficient_sam(checkpoint="efficient_sam_s.pth")
masks = model.predict(image, point_coords, point_labels)

Both repositories follow the same load-a-checkpoint-then-predict pattern, although their exact entry points differ; EfficientSAM uses a much lighter architecture, trading some accuracy for improved efficiency.

Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.

Pros of Track-Anything

  • Focuses on video object tracking and segmentation
  • Provides a user-friendly web interface for interactive tracking
  • Integrates multiple models for comprehensive video analysis

Cons of Track-Anything

  • May have higher computational requirements due to video processing
  • Potentially more complex setup and dependencies
  • Less specialized in high-quality image segmentation

Code Comparison

Track-Anything (illustrative pseudocode):

def track_anything(video_path, prompt):
    # sketch: load the video frames, then propagate the prompted mask with XMem
    frames = load_video(video_path)
    tracker = XMem(checkpoint=XMEM_CHECKPOINT)
    return tracker.track(frames, prompt)

SAM-HQ (illustrative pseudocode):

def segment_high_quality(image, prompt):
    # sketch: a single high-quality mask prediction on one image
    model = sam_hq_model(checkpoint=SAMHQ_CHECKPOINT)
    return model.generate(image, prompt)

Summary

Track-Anything is geared towards video object tracking and segmentation with a user-friendly interface, while SAM-HQ focuses on high-quality image segmentation. Track-Anything offers more comprehensive video analysis features but may require more computational resources. SAM-HQ provides specialized high-quality image segmentation capabilities with potentially simpler setup and usage.


README

Segment Anything in High Quality


NeurIPS 2023
ETH Zurich & HKUST

We propose HQ-SAM to upgrade SAM for high-quality zero-shot segmentation. Refer to our paper for more details.

Updates

:fire::fire: SAM for Video Segmentation: Interested in intersecting SAM and video? HQ-SAM is supported by DEVA in its text-prompted mode! Also, check out the related works MASA and SAM-PT, which build on SAM.

:fire::fire: SAM in 3D: Interested in intersecting SAM and 3D Gaussian Splatting? See our new work Gaussian Grouping! If you are interested in intersecting SAM and NeRF, please see SANeRF-HQ!

More: HQ-SAM is adopted in Osprey, CaR, SpatialRGPT to provide fine-grained mask annotations.

2023/11/06: HQ-SAM is adopted to annotate the Grounding-anything Dataset proposed by GLaMM.

2023/10/15: HQ-SAM is supported in the OpenMMLab PlayGround for annotation with Label-Studio.

2023/09/28: HQ-SAM is in ENIGMA-51 for annotating egocentric industrial data, with SAM comparison in paper.

2023/08/16: HQ-SAM is in segment-geospatial for segmenting geospatial data, and mask annotation tool ISAT!

2023/08/11: Support python package for easier pip installation.

2023/07/25: Light HQ-SAM is in EfficientSAM series combining with Grounded SAM!

2023/07/21: HQ-SAM is also available in OpenXLab apps, thanks to their support!

:rocket::rocket: 2023/07/17: We released Light HQ-SAM using TinyViT as backbone, for both fast and high-quality zero-shot segmentation, which reaches 41.2 FPS. Refer to Light HQ-SAM vs. MobileSAM for more details.

:trophy::1st_place_medal: 2023/07/14: Grounded HQ-SAM obtains first place :1st_place_medal: in the Segmentation in the Wild (SegInW) competition on the zero-shot track (hosted in a CVPR 2023 workshop), outperforming Grounded SAM. Refer to our SegInW evaluation for more details.

2023/07/05: We released SAM tuning instructions and HQSeg-44K data.

2023/07/04: HQ-SAM is adopted in SAM-PT to improve the SAM-based zero-shot video segmentation performance. Also, HQ-SAM is used in Grounded-SAM, Inpaint Anything and HQTrack (2nd in VOTS 2023).

2023/06/28: We released the ONNX export script and a colab notebook for exporting and using the ONNX model.

2023/06/23: Play with the HQ-SAM demo on Hugging Face, which supports point, box and text prompts.

2023/06/14: We released the Colab demo and the automatic mask generator notebook.

2023/06/13: We released the model checkpoints and demo visualization codes.

Visual comparison between SAM and HQ-SAM

SAM vs. HQ-SAM


Introduction

The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train the introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is trained only on this dataset of 44K masks, which takes just 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 9 diverse segmentation datasets across different downstream tasks, 7 of which are evaluated in a zero-shot transfer protocol.
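To make the design above more concrete, the following is a conceptual PyTorch sketch of the two ideas only (a learnable High-Quality Output Token, and fusion of early/final ViT features with mask-decoder features). It is an illustrative sketch, not the authors' implementation; all module names, shapes and the fusion layer are assumptions.

# Conceptual sketch only -- NOT the HQ-SAM implementation. Shapes and layers are assumptions.
import torch
import torch.nn as nn

class HQMaskHeadSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.hq_token = nn.Parameter(torch.zeros(1, 1, dim))      # learnable HQ output token
        self.fuse = nn.Conv2d(3 * dim, dim, kernel_size=1)        # fuse the three feature sources

    def forward(self, decoder_feat, early_vit_feat, final_vit_feat):
        # All inputs: (B, C, H, W). Fuse mask-decoder features with early and final ViT features.
        fused = self.fuse(torch.cat([decoder_feat, early_vit_feat, final_vit_feat], dim=1))
        b, c, h, w = fused.shape
        # In HQ-SAM the token is first refined by SAM's mask decoder; it is used directly here
        # only to keep the sketch short.
        token = self.hq_token.expand(b, -1, -1)                   # (B, 1, C)
        hq_mask = (token @ fused.view(b, c, h * w)).view(b, 1, h, w)
        return hq_mask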


Quantitative comparison between SAM and HQ-SAM

Note: For box-prompting-based evaluation, we feed SAM, MobileSAM and our HQ-SAM with the same image/video bounding boxes and adopt the single mask output mode of SAM.
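As an illustration of this protocol, here is a hedged sketch of box-prompted prediction in single-mask output mode; the box values are placeholders, and predictor is assumed to be a SamPredictor on which set_image has already been called (see Getting Started below).

# Sketch of the box-prompted, single-mask evaluation setting described above.
import numpy as np

box = np.array([100, 100, 400, 400])   # placeholder (x1, y1, x2, y2) from a detector or ground truth
masks, scores, _ = predictor.predict(
    box=box,
    multimask_output=False,            # single mask output mode, as used in the comparison
)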

We provide a comprehensive comparison of performance, model size and speed across SAM variants.

Various ViT backbones on COCO:

Note: For the COCO dataset, we use the SOTA detector FocalNet-DINO trained on the COCO dataset as our box prompt generator.

YTVIS and HQ-YTVIS

Note: Using the ViT-L backbone. We adopt the SOTA detector Mask2Former trained on the YouTubeVIS 2019 dataset as our video box prompt generator while reusing its object association prediction.

DAVIS

Note: Using the ViT-L backbone. We adopt the SOTA model XMem as our video box prompt generator while reusing its object association prediction.

Quick Installation via pip

pip install segment-anything-hq
python
from segment_anything_hq import sam_model_registry
model_type = "<model_type>" #"vit_l/vit_b/vit_h/vit_tiny"
sam_checkpoint = "<path/to/checkpoint>"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)

See a specific usage example (such as vit_l) by running the following commands:

export PYTHONPATH=$(pwd)
python demo/demo_hqsam_pip_example.py
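For orientation, a minimal sketch of using the pip package directly is shown below; it assumes the package exposes the same SamPredictor interface as the repository, and the checkpoint and image paths are placeholders.

# Minimal pip-package usage sketch; paths are placeholders.
import numpy as np
from PIL import Image
from segment_anything_hq import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_l"](checkpoint="sam_hq_vit_l.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("input.jpg").convert("RGB"))
predictor.set_image(image)
masks, _, _ = predictor.predict(
    point_coords=np.array([[250, 250]]),
    point_labels=np.array([1]),
    multimask_output=False,
)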

Standard Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

Clone the repository locally and install with

git clone https://github.com/SysCV/sam-hq.git
cd sam-hq; pip install -e .

The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.

pip install opencv-python pycocotools matplotlib onnxruntime onnx timm

Example conda environment setup

conda create --name sam_hq python=3.8 -y
conda activate sam_hq
conda install pytorch==1.10.0 torchvision==0.11.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install opencv-python pycocotools matplotlib onnxruntime onnx timm

# under your working directory
git clone https://github.com/SysCV/sam-hq.git
cd sam-hq
pip install -e .
export PYTHONPATH=$(pwd)

Model Checkpoints

Three HQ-SAM model versions are available, with different backbone sizes. These models can be instantiated by running

from segment_anything import sam_model_registry
sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")

Download the provided trained models below and put them into the pretrained_checkpoint folder:

mkdir pretrained_checkpoint

Click the links below to download the checkpoint for the corresponding model type. We also provide alternative model download links here or on Hugging Face.
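As a small convenience sketch, the model types can be mapped to their checkpoints as follows; only the ViT-H filename appears in this document, so treat the other filenames as assumptions and verify them against the download links.

# Map model types to checkpoint files; filenames other than sam_hq_vit_h.pth are assumptions.
from segment_anything import sam_model_registry

checkpoints = {
    "vit_b": "pretrained_checkpoint/sam_hq_vit_b.pth",
    "vit_l": "pretrained_checkpoint/sam_hq_vit_l.pth",
    "vit_h": "pretrained_checkpoint/sam_hq_vit_h.pth",
}

model_type = "vit_l"
sam = sam_model_registry[model_type](checkpoint=checkpoints[model_type])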

Getting Started

First download a model checkpoint. Then the model can be used in just a few lines to get masks from a given prompt:

from segment_anything import SamPredictor, sam_model_registry
sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
predictor = SamPredictor(sam)
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)

Additionally, see the usage examples in our demo, Colab notebook and automatic mask generator notebook.

To obtain HQ-SAM's visual result:

python demo/demo_hqsam.py

To obtain the baseline SAM's visual result (note that you need to download the original SAM checkpoint for the baseline SAM-L model and put it into the pretrained_checkpoint folder):

python demo/demo_sam.py

To obtain Light HQ-SAM's visual result:

python demo/demo_hqsam_light.py

HQ-SAM Tuning and HQ-Seg44k Data

We provide detailed training, evaluation, visualization and data downloading instructions in HQ-SAM training. You can also replace our training data with your own to obtain an HQ-SAM tuned for a specific application domain (such as medical imaging, OCR or remote sensing).

Please change the current folder path to:

cd train

and then refer to the detailed README instructions there.

Grounded HQ-SAM vs Grounded SAM on SegInW

Grounded HQ-SAM wins first place :1st_place_medal: on the SegInW benchmark (consisting of 25 public zero-shot in-the-wild segmentation datasets), outperforming Grounded SAM with the same Grounding-DINO detector.

| Model Name      | Encoder | GroundingDINO | Mean AP | Evaluation Script | Log | Output Json |
|-----------------|---------|---------------|---------|-------------------|-----|-------------|
| Grounded SAM    | vit-h   | swin-b        | 48.7    | script            | log | result      |
| Grounded HQ-SAM | vit-h   | swin-b        | 49.6    | script            | log | result      |

Please change the current folder path to:

cd seginw

We provide detailed evaluation instructions and metrics on SegInW in Grounded-HQ-SAM evaluation.

Light HQ-SAM vs MobileSAM on COCO

We propose Light HQ-SAM based on the TinyViT image encoder provided by MobileSAM. We provide a quantitative comparison of zero-shot COCO performance, speed and memory below. Try Light HQ-SAM here.

| Model        | Encoder | AP   | AP@L | AP@M | AP@S | Model Params (MB) | FPS  | Memory (GB) |
|--------------|---------|------|------|------|------|-------------------|------|-------------|
| MobileSAM    | TinyViT | 44.3 | 61.8 | 48.1 | 28.8 | 38.6              | 44.8 | 3.7         |
| Light HQ-SAM | TinyViT | 45.0 | 62.8 | 48.8 | 29.2 | 40.3              | 41.2 | 3.7         |

Note: For the COCO dataset, we use the same SOTA detector FocalNet-DINO trained on COCO as the box prompt generator for both Light HQ-SAM and MobileSAM.

ONNX export

HQ-SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime. Export the model with

python scripts/export_onnx_model.py --checkpoint <path/to/checkpoint> --model-type <model_type> --output <path/to/output>

See the example notebook for details on how to combine image preprocessing via HQ-SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export.
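As a quick post-export sanity check, the exported decoder can be loaded with onnxruntime and its expected inputs listed. This is a hedged sketch with a placeholder model path; it prints the input names rather than assuming them.

# Load the exported decoder and inspect its inputs; the file name is a placeholder.
import onnxruntime as ort

session = ort.InferenceSession("sam_hq_decoder.onnx", providers=["CPUExecutionProvider"])
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
# The image embeddings (and any extra inputs required by the HQ decoder) must still be computed
# with the PyTorch image encoder and fed to session.run(); see the example notebook for the
# full pre/post-processing pipeline.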

Citation

If you find HQ-SAM useful in your research or refer to the provided baseline results, please star :star: this repository and consider citing :pencil::

@inproceedings{sam_hq,
    title={Segment Anything in High Quality},
    author={Ke, Lei and Ye, Mingqiao and Danelljan, Martin and Liu, Yifan and Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher},
    booktitle={NeurIPS},
    year={2023}
}  

Related high-quality instance segmentation work:

@inproceedings{transfiner,
    title={Mask Transfiner for High-Quality Instance Segmentation},
    author={Ke, Lei and Danelljan, Martin and Li, Xia and Tai, Yu-Wing and Tang, Chi-Keung and Yu, Fisher},
    booktitle={CVPR},
    year={2022}
}

Acknowledgments