facebookresearch/DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body

Top Related Projects

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.

OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation

Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.

The project is an official implementation of the ECCV 2018 paper "Simple Baselines for Human Pose Estimation and Tracking" (https://arxiv.org/abs/1804.06208)

Pretrained models for TensorFlow.js

Quick Overview

DensePose is a project by Facebook AI Research that aims to map all human pixels of an RGB image to the 3D surface of the human body. It provides dense human pose estimation, allowing for detailed understanding of human body positioning and shape in images and videos.

Pros

  • High-quality, dense human pose estimation
  • Integrates well with other computer vision tasks
  • Supports both images and video input
  • Provides pre-trained models for quick implementation

Cons

  • Computationally intensive, may require powerful hardware
  • Limited to human subjects only
  • Requires careful configuration and threshold tuning for optimal results
  • May struggle with occluded or partially visible subjects

Code Examples

  1. Loading a pre-trained DensePose model:
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config
from densepose.vis.extractor import DensePoseResultExtractor

cfg = get_cfg()
add_densepose_config(cfg)  # register DensePose-specific config options
cfg.merge_from_file("path/to/densepose_rcnn_R_50_FPN_s1x.yaml")
cfg.MODEL.WEIGHTS = "path/to/densepose_rcnn_R_50_FPN_s1x.pkl"
predictor = DefaultPredictor(cfg)
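Two optional knobs worth knowing: the compute device and the detection score threshold. Both are standard Detectron2 config keys and must be set before `DefaultPredictor(cfg)` is constructed:

cfg.MODEL.DEVICE = "cuda"  # use "cpu" if no GPU is available
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.8  # keep only confident detections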
  2. Performing inference on an image:
import cv2

# Detectron2 predictors expect BGR input, which cv2.imread provides
image = cv2.imread("path/to/image.jpg")
outputs = predictor(image)
# Extract per-instance DensePose results together with their bounding boxes
densepose_result = DensePoseResultExtractor()(outputs["instances"])
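In recent Detectron2 DensePose versions the extractor returns the per-person chart results alongside their boxes, with each result carrying a body-part label map and per-pixel UV coordinates. A minimal sketch of reading them out, assuming the `labels` and `uv` fields of `DensePoseChartResult`:

results, boxes_xywh = densepose_result
for result in results:
    labels = result.labels.cpu().numpy()  # (H, W) part index per pixel, 0 = background
    uv = result.uv.cpu().numpy()          # (2, H, W) U and V surface coordinates
    print(labels.shape, uv.shape)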
  3. Visualizing DensePose results:
from densepose.vis.densepose_results import DensePoseResultsFineSegmentationVisualizer

# Overlay the fine body-part segmentation on the input image
visualizer = DensePoseResultsFineSegmentationVisualizer()
image_vis = visualizer.visualize(image, densepose_result)
cv2.imshow("DensePose Result", image_vis)
cv2.waitKey(0)
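Since the predictor operates on single frames, the video support noted above amounts to running the same pipeline frame by frame, reusing the predictor and visualizer from the examples above. A minimal sketch, with a hypothetical input path:

cap = cv2.VideoCapture("path/to/video.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of stream
    outputs = predictor(frame)
    results = DensePoseResultExtractor()(outputs["instances"])
    frame_vis = visualizer.visualize(frame, results)
    cv2.imshow("DensePose Video", frame_vis)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()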

Getting Started

To get started with DensePose:

  1. Install dependencies (DensePose now ships as a project inside Detectron2, so it is installed from the Detectron2 repository rather than the archived DensePose repository):
pip install torch torchvision
pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install 'git+https://github.com/facebookresearch/detectron2.git#subdirectory=projects/DensePose'
  2. Download a pre-trained model:
wget https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl
  3. Use the code examples above to load the model, perform inference, and visualize results.
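Putting the steps together, here is a minimal end-to-end sketch; the config path assumes a local Detectron2 checkout (the YAML ships under projects/DensePose/configs), and the weights file is the one downloaded above:

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config
from densepose.vis.extractor import DensePoseResultExtractor
from densepose.vis.densepose_results import DensePoseResultsFineSegmentationVisualizer

cfg = get_cfg()
add_densepose_config(cfg)
cfg.merge_from_file("detectron2/projects/DensePose/configs/densepose_rcnn_R_50_FPN_s1x.yaml")  # assumed checkout path
cfg.MODEL.WEIGHTS = "model_final_162be9.pkl"  # downloaded in step 2
predictor = DefaultPredictor(cfg)

image = cv2.imread("path/to/image.jpg")
results = DensePoseResultExtractor()(predictor(image)["instances"])
image_vis = DensePoseResultsFineSegmentationVisualizer().visualize(image, results)
cv2.imwrite("densepose_output.png", image_vis)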

Competitor Comparisons

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.

Pros of Detectron2

  • More comprehensive and versatile, supporting a wider range of computer vision tasks
  • Better documentation and community support
  • Regularly updated with new features and improvements

Cons of Detectron2

  • Steeper learning curve due to its broader scope
  • May be overkill for projects focused solely on dense pose estimation

Code Comparison

DensePose:

from detectron2.config import get_cfg
from densepose import add_densepose_config
from densepose.engine.trainer import Trainer
from densepose.modeling.roi_heads.roi_head import ROI_DENSEPOSE_HEAD_REGISTRY

cfg = get_cfg()
add_densepose_config(cfg)  # register DensePose options on top of Detectron2's config

Detectron2:

from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file("path/to/config.yaml")
model = build_model(cfg)
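Note that `build_model` creates the architecture with randomly initialized weights; in Detectron2, checkpoints are loaded separately:

from detectron2.checkpoint import DetectionCheckpointer

DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)  # load the weights named in the config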

Both repositories are maintained by Facebook Research and focus on computer vision tasks. DensePose is specifically designed for dense human pose estimation, while Detectron2 is a more general-purpose computer vision library that includes DensePose functionality among many other features.

Detectron2 offers a wider range of applications and is more actively maintained, making it a better choice for projects requiring multiple computer vision tasks. However, for projects solely focused on dense pose estimation, DensePose may be more straightforward to use.

OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation

Pros of OpenPose

  • Supports real-time multi-person keypoint detection
  • Provides 2D and 3D pose estimation capabilities
  • Offers a wide range of pre-trained models for different body parts

Cons of OpenPose

  • Limited to keypoint detection, lacking dense surface mapping
  • May struggle with occlusions and complex poses
  • Requires more computational resources for real-time performance

Code Comparison

OpenPose:

// Configure OpenPose in asynchronous mode
op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
opWrapper.configure(op::WrapperStructPose{});  // default pose settings
opWrapper.start();

// Process image
auto datums = opWrapper.emplaceAndPop(imageToProcess);

DensePose:

# Configuring DensePose
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config

cfg = get_cfg()
add_densepose_config(cfg)  # DensePose configs are not part of the Detectron2 model zoo
cfg.merge_from_file("path/to/densepose_rcnn_R_50_FPN_s1x.yaml")
predictor = DefaultPredictor(cfg)

# Process image
outputs = predictor(image)

Both repositories focus on human pose estimation, but they differ in their approaches and capabilities. OpenPose excels in real-time multi-person keypoint detection, while DensePose provides dense surface mapping for more detailed body analysis. OpenPose offers broader support for different body parts, but DensePose may handle occlusions better due to its dense mapping approach. The code snippets demonstrate the different setup and usage patterns for each library.

Fast and accurate human pose estimation in PyTorch. Contains implementation of "Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose" paper.

Pros of lightweight-human-pose-estimation.pytorch

  • Lightweight architecture, suitable for real-time applications
  • Easy to use and deploy, with PyTorch implementation
  • Performs real-time 2D multi-person pose estimation, even on CPU

Cons of lightweight-human-pose-estimation.pytorch

  • Less accurate for dense surface mapping compared to DensePose
  • Limited to keypoint-based pose estimation
  • Smaller community and fewer resources compared to DensePose

Code Comparison

DensePose:

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config

cfg = get_cfg()
add_densepose_config(cfg)  # required before merging a DensePose config
cfg.merge_from_file("path/to/densepose_rcnn_R_50_FPN_s1x.yaml")
predictor = DefaultPredictor(cfg)

lightweight-human-pose-estimation.pytorch:

import torch

from models.with_mobilenet import PoseEstimationWithMobileNet
from modules.load_state import load_state

net = PoseEstimationWithMobileNet()
checkpoint = torch.load('checkpoint_iter_370000.pth', map_location='cpu')
load_state(net, checkpoint)

Both repositories offer pose estimation capabilities, but DensePose focuses on dense surface mapping and is part of the larger Detectron2 ecosystem, while lightweight-human-pose-estimation.pytorch prioritizes efficiency and ease of use for keypoint-based pose estimation.

The project is an official implementation of the ECCV 2018 paper "Simple Baselines for Human Pose Estimation and Tracking" (https://arxiv.org/abs/1804.06208)

Pros of human-pose-estimation.pytorch

  • Simpler implementation focused on 2D pose estimation
  • Easier to set up and use for basic human pose tasks
  • More lightweight and potentially faster for 2D applications

Cons of human-pose-estimation.pytorch

  • Limited to 2D pose estimation, lacking the dense 3D surface mapping of DensePose
  • Less comprehensive in terms of full-body analysis and understanding
  • May not be as suitable for complex applications requiring detailed body surface information

Code Comparison

DensePose:

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config
from densepose.vis.extractor import DensePoseResultExtractor

cfg = get_cfg()
add_densepose_config(cfg)
cfg.merge_from_file("path/to/config.yaml")
predictor = DefaultPredictor(cfg)

human-pose-estimation.pytorch:

import torch

from models.pose_resnet import get_pose_net
from core.inference import get_final_preds

model = get_pose_net(cfg, is_train=False)
model.load_state_dict(torch.load('path/to/model.pth'))
# heatmaps come from a forward pass; center and scale describe the person crop
preds, _ = get_final_preds(cfg, heatmaps, center, scale)

Pretrained models for TensorFlow.js

Pros of tfjs-models

  • Runs in web browsers, enabling client-side ML without server dependencies
  • Supports a wide range of pre-trained models for various tasks
  • Easy integration with web applications using JavaScript

Cons of tfjs-models

  • Generally lower performance compared to native implementations
  • Limited to models that can run efficiently in browsers
  • May have reduced accuracy for complex tasks like dense pose estimation

Code Comparison

DensePose:

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config

cfg = get_cfg()
add_densepose_config(cfg)
cfg.merge_from_file("densepose_rcnn_R_50_FPN_s1x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(image)

tfjs-models:

import * as poseDetection from '@tensorflow-models/pose-detection';

const detector = await poseDetection.createDetector(poseDetection.SupportedModels.MoveNet);
const poses = await detector.estimatePoses(image);

While both repositories focus on computer vision tasks, DensePose specializes in dense human pose estimation, offering high accuracy but requiring more computational resources. tfjs-models provides a broader range of models optimized for web environments, sacrificing some performance for accessibility and ease of integration in web applications.

README

DensePose:

Dense Human Pose Estimation In The Wild

Rıza Alp Güler, Natalia Neverova, Iasonas Kokkinos

[densepose.org] [arXiv] [BibTeX]

Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. DensePose-RCNN is implemented in the Detectron framework and is powered by Caffe2.

In this repository, we provide the code to train and evaluate DensePose-RCNN. We also provide notebooks to visualize the collected DensePose-COCO dataset and show the correspondences to the SMPL model.

Important Note

!!! This project is no longer supported !!!

DensePose is now part of Detectron2 (https://github.com/facebookresearch/detectron2/tree/master/projects/DensePose). There you can find the most up to date architectures / models. If you think some feature is missing from there, please post an issue in Detectron2 DensePose.

Installation

Please find installation instructions for Caffe2 and DensePose in INSTALL.md, a document based on the Detectron installation instructions.

Inference-Training-Testing

After installation, please see GETTING_STARTED.md for examples of inference, training, and testing.

Notebooks

Visualization of DensePose-COCO annotations:

See notebooks/DensePose-COCO-Visualize.ipynb to visualize the DensePose-COCO annotations on the images:


DensePose-COCO in 3D:

See notebooks/DensePose-COCO-on-SMPL.ipynb to localize the DensePose-COCO annotations on the 3D template (SMPL) model:


Visualize DensePose-RCNN Results:

See notebooks/DensePose-RCNN-Visualize-Results.ipynb to visualize the inferred DensePose-RCNN results.


DensePose-RCNN Texture Transfer:

See notebooks/DensePose-RCNN-Texture-Transfer.ipynb to transfer texture onto detected people using the inferred DensePose-RCNN results:

License

This source code is licensed under the license found in the LICENSE file in the root directory of this source tree.

Citing DensePose

If you use DensePose, please use the following BibTeX entry.

  @InProceedings{Guler2018DensePose,
    title={DensePose: Dense Human Pose Estimation In The Wild},
    author={R{\i}za Alp G{\"u}ler and Natalia Neverova and Iasonas Kokkinos},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2018}
  }