
isl-org / ZoeDepth

Metric depth estimation from a single image


Top Related Projects

  • DINO: PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
  • MiDaS: Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
  • AdelaiDepth: Contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape', which address monocular depth estimation and 3D scene reconstruction from a single image
  • monodepth2: [ICCV 2019] Monocular depth estimation from a single image

Quick Overview

ZoeDepth is an open-source project for monocular depth estimation, providing state-of-the-art performance on various benchmarks. It offers a lightweight and efficient model for predicting depth from a single RGB image, making it suitable for a wide range of applications in computer vision and robotics.

Pros

  • High accuracy and performance on multiple depth estimation benchmarks
  • Lightweight model architecture, suitable for real-time applications
  • Easy-to-use API for both inference and training
  • Supports multiple backbones and model variants (see the torch hub sketch after the Cons list)

Cons

  • Limited documentation for advanced usage and customization
  • Dependency on specific versions of PyTorch and other libraries
  • May require significant computational resources for training custom models
  • Limited support for non-standard image formats or resolutions
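
The "multiple backbones and model variants" above correspond to the three torch hub entry points (ZoeD_N, ZoeD_K, ZoeD_NK) documented further down in the README. Here is a minimal sketch of loading one variant and running inference; the image path is a placeholder:

import torch
from PIL import Image

# Load the NYU-trained variant; "ZoeD_K" and "ZoeD_NK" are the other entry points
model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

image = Image.open("example.jpg").convert("RGB")  # placeholder path
depth = model.infer_pil(image)  # metric depth as a numpy array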

Code Examples

  1. Loading a pre-trained model and performing inference:

from PIL import Image

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

conf = get_config("zoedepth", "infer")
model = build_model(conf)
image_pil = Image.open("path/to/your/image.jpg").convert("RGB")
depth_map = model.infer_pil(image_pil)

  2. Visualizing the depth map:

from PIL import Image
from zoedepth.utils.misc import colorize

colored_depth = colorize(depth_map)  # color-mapped depth as a numpy array
Image.fromarray(colored_depth).save("depth_map.png")

  3. Training a custom model (via the provided training script; see the Training section of the README below):

python train_mono.py -m zoedepth --pretrained_resource=""

Getting Started

To get started with ZoeDepth, follow these steps:

  1. Clone the repository:

    git clone https://github.com/isl-org/ZoeDepth.git
    cd ZoeDepth
    
  2. Install dependencies (the README below creates the environment from environment.yml):

    conda env create -n zoe --file environment.yml
    conda activate zoe
    
  3. Download pre-trained weights:

    wget https://github.com/isl-org/ZoeDepth/releases/download/v1.0/ZoeD_M12_N.pt
    
  4. Run inference on an image:

    from zoedepth.models.builder import build_model
    from zoedepth.utils.config import get_config
    from PIL import Image
    
    conf = get_config("zoedepth", "infer")
    model = build_model(conf)
    image = Image.open("path/to/your/image.jpg").convert("RGB")
    depth_map = model.infer_pil(image)
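
  5. (Optional) Colorize and save the predicted depth. This mirrors the colorize/save snippet in the README's Usage section below; the output filename is a placeholder:

    from zoedepth.utils.misc import colorize
    
    colored = colorize(depth_map)  # color-mapped depth as a numpy array
    Image.fromarray(colored).save("depth_colored.png")  # placeholder filename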
    

Competitor Comparisons

DINO

PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO

Pros of DINO

  • Focuses on self-supervised learning for computer vision tasks
  • Provides a more general-purpose vision model for various applications
  • Offers pre-trained models and extensive documentation

Cons of DINO

  • Requires more computational resources for training and inference
  • May have a steeper learning curve for implementation
  • Less specialized for depth estimation tasks

Code Comparison

ZoeDepth:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

conf = get_config("zoedepth_nk", "infer")
model = build_model(conf)
depth = model.infer_pil(img)

DINO:

import torch
import torchvision.transforms as transforms

# utils and vision_transformer are modules in the DINO repository
import utils
import vision_transformer as vits

model = vits.__dict__["vit_small"](patch_size=16, num_classes=0)
utils.load_pretrained_weights(model, "path/to/checkpoint.pth", "teacher", "vit_small", 16)
img = transforms.ToTensor()(image)  # image: a PIL image
with torch.no_grad():
    feat = model(img.unsqueeze(0))

Both repositories offer unique approaches to computer vision tasks. ZoeDepth specializes in depth estimation, while DINO provides a more versatile self-supervised learning framework for various vision applications. The choice between them depends on the specific requirements of your project and the level of specialization needed.

MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"

Pros of MiDaS

  • More established and widely used in the research community
  • Supports a broader range of input resolutions
  • Offers pre-trained models for various architectures (e.g., ResNet, EfficientNet)

Cons of MiDaS

  • Generally slower inference time compared to ZoeDepth
  • Requires more computational resources for training and inference
  • Less focus on real-time applications

Code Comparison

MiDaS:

import torch

# Load MiDaS and its preprocessing transforms via torch hub (the documented entry point)
model_type = "DPT_Large"
midas = torch.hub.load("intel-isl/MiDaS", model_type)
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

input_batch = midas_transforms.dpt_transform(img)  # img: an RGB image as a numpy array
with torch.no_grad():
    prediction = midas(input_batch)

ZoeDepth:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

conf = get_config("zoedepth_nk", "infer")
model = build_model(conf)

depth = model.infer_pil(image)

Both repositories focus on monocular depth estimation, but ZoeDepth is designed for faster inference and real-time applications. MiDaS offers more flexibility in terms of model architectures and input resolutions, making it suitable for a wider range of research scenarios. ZoeDepth's code is more streamlined for quick deployment, while MiDaS provides more options for customization and fine-tuning.

AdelaiDepth

This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape', which address monocular depth estimation and 3D scene reconstruction from a single image.

Pros of AdelaiDepth

  • Offers a wider range of depth estimation models and techniques
  • Provides more comprehensive documentation and examples
  • Includes pre-trained models for various datasets and scenarios

Cons of AdelaiDepth

  • Less frequent updates and maintenance compared to ZoeDepth
  • More complex setup and usage due to multiple submodules
  • Larger repository size, which may impact download and setup time

Code Comparison

ZoeDepth:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

conf = get_config("zoedepth", "infer")
model = build_model(conf)
depth_map = model.infer_pil(img)

AdelaiDepth:

from lib.multi_depth_model_woauxi import RelDepthModel

model = RelDepthModel(backbone='resnext101')
# Pre-trained weights must be loaded from a released checkpoint before inference
depth_map = model.inference(img)  # img: a preprocessed image tensor

Both repositories focus on depth estimation, but ZoeDepth offers a more streamlined and user-friendly approach, while AdelaiDepth provides a broader range of models and techniques at the cost of increased complexity. ZoeDepth's code is generally more concise and easier to use, whereas AdelaiDepth offers more flexibility and options for advanced users.

monodepth2

[ICCV 2019] Monocular depth estimation from a single image

Pros of monodepth2

  • Established and well-documented project with extensive research backing
  • Supports both monocular and stereo depth estimation
  • Includes pre-trained models for various datasets

Cons of monodepth2

  • Less recent development and updates compared to ZoeDepth
  • May not perform as well on diverse real-world scenes
  • Requires more manual configuration for different use cases

Code Comparison

ZoeDepth:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

conf = get_config("zoedepth_nk", "infer")
model = build_model(conf)
depth_map = model.infer_pil(image)

monodepth2:

import torch
from monodepth2 import networks
from monodepth2.utils import download_model_if_doesnt_exist

download_model_if_doesnt_exist("mono_640x192")  # fetches pre-trained weights into models/

encoder = networks.ResnetEncoder(18, False)
depth_decoder = networks.DepthDecoder(num_ch_enc=encoder.num_ch_enc, scales=range(4))
encoder.load_state_dict(torch.load("models/mono_640x192/encoder.pth"), strict=False)
depth_decoder.load_state_dict(torch.load("models/mono_640x192/depth.pth"))

outputs = depth_decoder(encoder(input_image))  # input_image: preprocessed image tensor
disp = outputs[("disp", 0)]  # disparity map at the finest scale

Both projects aim to estimate depth from images, but ZoeDepth offers a more streamlined API and recent advancements in depth estimation techniques. monodepth2 provides a solid foundation with extensive research and flexibility for different depth estimation scenarios.

README

ZoeDepth: Combining relative and metric depth (Official implementation)


License: MIT

ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth

Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, Matthias Müller

[Paper]



Usage

It is recommended to fetch the latest MiDaS repo via torch hub before proceeding:

import torch

torch.hub.help("intel-isl/MiDaS", "DPT_BEiT_L_384", force_reload=True)  # Triggers fresh download of MiDaS repo

ZoeDepth models

Using torch hub

import torch

repo = "isl-org/ZoeDepth"
# Zoe_N
model_zoe_n = torch.hub.load(repo, "ZoeD_N", pretrained=True)

# Zoe_K
model_zoe_k = torch.hub.load(repo, "ZoeD_K", pretrained=True)

# Zoe_NK
model_zoe_nk = torch.hub.load(repo, "ZoeD_NK", pretrained=True)

Using local copy

Clone this repo:

git clone https://github.com/isl-org/ZoeDepth.git && cd ZoeDepth

Using local torch hub

You can use local source for torch hub to load the ZoeDepth models, for example:

import torch

# Zoe_N
model_zoe_n = torch.hub.load(".", "ZoeD_N", source="local", pretrained=True)

or load the models manually

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

# ZoeD_N
conf = get_config("zoedepth", "infer")
model_zoe_n = build_model(conf)

# ZoeD_K
conf = get_config("zoedepth", "infer", config_version="kitti")
model_zoe_k = build_model(conf)

# ZoeD_NK
conf = get_config("zoedepth_nk", "infer")
model_zoe_nk = build_model(conf)

Using ZoeD models to predict depth

##### sample prediction
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
zoe = model_zoe_n.to(DEVICE)


# Local file
from PIL import Image
image = Image.open("/path/to/image.jpg").convert("RGB")  # load
depth_numpy = zoe.infer_pil(image)  # as numpy

depth_pil = zoe.infer_pil(image, output_type="pil")  # as 16-bit PIL Image

depth_tensor = zoe.infer_pil(image, output_type="tensor")  # as torch tensor



# Tensor 
from zoedepth.utils.misc import pil_to_batched_tensor
X = pil_to_batched_tensor(image).to(DEVICE)
depth_tensor = zoe.infer(X)



# From URL
from zoedepth.utils.misc import get_image_from_url

# Example URL
URL = "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS4W8H_Nxk_rs3Vje_zj6mglPOH7bnPhQitBH8WkqjlqQVotdtDEG37BsnGofME3_u6lDk&usqp=CAU"


image = get_image_from_url(URL)  # fetch
depth = zoe.infer_pil(image)

# Save raw
from zoedepth.utils.misc import save_raw_16bit
fpath = "/path/to/output.png"
save_raw_16bit(depth, fpath)

# Colorize output
from zoedepth.utils.misc import colorize

colored = colorize(depth)

# save colored output
fpath_colored = "/path/to/output_colored.png"
Image.fromarray(colored).save(fpath_colored)

Environment setup

The project depends on:

  • pytorch (Main framework)
  • timm (Backbone helper for MiDaS)
  • pillow, matplotlib, scipy, h5py, opencv (utilities)

Install the environment using environment.yml:

Using mamba (fastest):

mamba env create -n zoe --file environment.yml
mamba activate zoe

Using conda:

conda env create -n zoe --file environment.yml
conda activate zoe

Sanity checks (Recommended)

Check if models can be loaded:

python sanity_hub.py

Try a demo prediction pipeline:

python sanity.py

This will save a file pred.png in the root folder, showing RGB and corresponding predicted depth side-by-side.

Model files

Models are defined under the models/ folder, with models/<model_name>_<version>.py containing the model definitions and models/config_<model_name>.json containing the configuration.

Single metric head models (Zoe_N and Zoe_K from the paper) share a common definition and are defined under models/zoedepth, whereas the multi-headed model (Zoe_NK) is defined under models/zoedepth_nk.
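
For orientation, these folder names are the same config names passed to get_config in the Usage section above; a minimal illustration:

from zoedepth.utils.config import get_config

# "zoedepth"    -> single metric head models under models/zoedepth (Zoe_N / Zoe_K)
# "zoedepth_nk" -> multi-headed model under models/zoedepth_nk (Zoe_NK)
conf_single = get_config("zoedepth", "infer")
conf_multi = get_config("zoedepth_nk", "infer")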

Evaluation

Download the required dataset and change the DATASETS_CONFIG dictionary in utils/config.py accordingly.

Evaluating official models

On NYU-Depth-v2 for example:

For ZoeD_N:

python evaluate.py -m zoedepth -d nyu

For ZoeD_NK:

python evaluate.py -m zoedepth_nk -d nyu

Evaluating local checkpoint

python evaluate.py -m zoedepth --pretrained_resource="local::/path/to/local/ckpt.pt" -d nyu

Pretrained resources are prefixed with url:: to indicate that weights should be fetched from a URL, or with local:: to indicate that the path is a local file. Refer to models/model_io.py for details.

The dataset name should match the corresponding key in utils.config.DATASETS_CONFIG.
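
The same resource string can also be used programmatically. Below is a hedged sketch that assumes get_config forwards keyword overrides such as pretrained_resource into the model config (as the --pretrained_resource CLI flag suggests); the checkpoint path is a placeholder:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

# Assumption: keyword arguments override config fields, mirroring --pretrained_resource
conf = get_config("zoedepth", "infer", pretrained_resource="local::/path/to/local/ckpt.pt")
model = build_model(conf)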

Training

Download the training datasets as per the instructions given here. Then, for training a single head model on NYU-Depth-v2:

python train_mono.py -m zoedepth --pretrained_resource=""

For training the Zoe-NK model:

python train_mix.py -m zoedepth_nk --pretrained_resource=""

Gradio demo

We provide a UI demo built using gradio. To get started, install UI requirements:

pip install -r ui/ui_requirements.txt

Then launch the gradio UI:

python -m ui.app

The UI is also hosted on HuggingFace🤗 here.

Citation

@misc{https://doi.org/10.48550/arxiv.2302.12288,
  doi = {10.48550/ARXIV.2302.12288},
  url = {https://arxiv.org/abs/2302.12288},
  author = {Bhat, Shariq Farooq and Birkl, Reiner and Wofk, Diana and Wonka, Peter and Müller, Matthias},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}