
IntelRealSense / librealsense

Intel® RealSense™ SDK


Top Related Projects

libfreenect: Drivers and libraries for the Xbox Kinect device on Windows, Linux, and OS X

Azure-Kinect-Sensor-SDK: A cross-platform (Linux and Windows) user-mode SDK to read data from your Azure Kinect device

Quick Overview

librealsense is an open-source, cross-platform SDK for Intel RealSense depth cameras. It provides a comprehensive set of tools and APIs for accessing and manipulating data from RealSense devices, enabling developers to create applications that utilize depth sensing, 3D imaging, and computer vision capabilities.

Pros

  • Extensive support for various Intel RealSense devices and their features
  • Cross-platform compatibility (Windows, Linux, macOS, Android)
  • Rich set of tools, including viewer applications and debug utilities
  • Active community and regular updates from Intel

Cons

  • Limited support for non-Intel depth cameras
  • Steep learning curve for beginners
  • Some users report occasional stability issues on certain platforms
  • Documentation can be overwhelming due to the wide range of features

Code Examples

  1. Initializing a RealSense pipeline and capturing frames:
#include <librealsense2/rs.hpp>

rs2::pipeline pipe;
pipe.start();

while (true) {
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    rs2::video_frame color = frames.get_color_frame();
    
    // Process frames here
}
  2. Accessing depth data and converting to meters:
int x = 320, y = 240; // pixel coordinates to query (chosen for illustration)
rs2::depth_frame depth_frame = frames.get_depth_frame();
float depth_value = depth_frame.get_distance(x, y);
std::cout << "Depth at pixel (" << x << ", " << y << "): " << depth_value << " meters" << std::endl;
  3. Applying spatial filtering to depth data:
rs2::spatial_filter spatial_filter;
rs2::depth_frame filtered_depth = spatial_filter.process(depth_frame);
  4. Generating a point cloud:
rs2::pointcloud pc;
rs2::points points = pc.calculate(depth_frame);
auto vertices = points.get_vertices();
// Access 3D coordinates of each point

Getting Started

  1. Install librealsense:

    • On Ubuntu: register Intel's APT repository (see the Linux installation guide), then sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev
    • On Windows: Download and run the installer from the GitHub releases page
  2. Include the library in your C++ project:

    #include <librealsense2/rs.hpp>
    
  3. Link against the library:

    • On Linux: -lrealsense2
    • On Windows: Add the library path and link against realsense2.lib
  4. Initialize a pipeline and start capturing:

    rs2::pipeline pipe;
    pipe.start();
    rs2::frameset frames = pipe.wait_for_frames();
    
  5. Refer to the examples and documentation for more advanced usage and specific features.

Competitor Comparisons

libfreenect: Drivers and libraries for the Xbox Kinect device on Windows, Linux, and OS X

Pros of libfreenect

  • Open-source and community-driven, allowing for greater flexibility and customization
  • Supports a wider range of Kinect devices, including older models
  • Lighter weight and potentially faster for specific use cases

Cons of libfreenect

  • Less comprehensive documentation compared to librealsense
  • Fewer built-in processing algorithms and features
  • Limited support for newer depth sensing technologies

Code Comparison

libfreenect example:

freenect_context *f_ctx;
freenect_device *f_dev;
freenect_init(&f_ctx, NULL);
freenect_open_device(f_ctx, &f_dev, 0);
freenect_start_depth(f_dev);

librealsense example:

rs2::pipeline pipe;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_DEPTH);
pipe.start(cfg);
auto frames = pipe.wait_for_frames();
auto depth = frames.get_depth_frame();

Both libraries provide APIs for accessing depth data, but librealsense offers a more modern and feature-rich interface with built-in configuration options and frame handling. libfreenect's API is simpler but may require more manual setup for advanced features.

Azure-Kinect-Sensor-SDK: A cross-platform (Linux and Windows) user-mode SDK to read data from your Azure Kinect device

Pros of Azure-Kinect-Sensor-SDK

  • Better integration with Azure cloud services and AI capabilities
  • More comprehensive documentation and official Microsoft support
  • Advanced body tracking and skeletal detection features

Cons of Azure-Kinect-Sensor-SDK

  • Limited to Azure Kinect hardware, less versatile than librealsense
  • Steeper learning curve for developers new to Microsoft ecosystems
  • Less frequent updates and smaller community compared to librealsense

Code Comparison

Azure-Kinect-Sensor-SDK:

k4a_device_t device = NULL;
k4a_device_open(K4A_DEVICE_DEFAULT, &device);
k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
k4a_device_start_cameras(device, &config);

librealsense:

rs2::pipeline pipe;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
pipe.start(cfg);
auto frames = pipe.wait_for_frames();
auto color_frame = frames.get_color_frame();

Both SDKs provide similar functionality for device initialization and configuration, but Azure-Kinect-Sensor-SDK uses a more verbose approach with explicit device opening and configuration. librealsense offers a more streamlined pipeline-based API, which may be easier for beginners to understand and use.


README





Overview

Intel® RealSense™ SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras.

:pushpin: For other Intel® RealSense™ devices (F200, R200, LR200 and ZR300), please refer to the latest legacy release.

The SDK allows depth and color streaming, and provides intrinsic and extrinsic calibration information. The library also offers synthetic streams (pointcloud, depth aligned to color and vice versa), and built-in support for recording and playback of streaming sessions.

Developer kits containing the necessary hardware to use this library are available for purchase at store.intelrealsense.com. Information about Intel® RealSense™ technology is available at www.intelrealsense.com.

:open_file_folder: Don't have access to a RealSense camera? Check out the sample data.

Update on Recent Changes to the RealSense Product Line

Intel has EOLed the LiDAR, Facial Authentication, and Tracking product lines. These products have been discontinued and will no longer be available for new orders.

Intel will continue to sell and support stereo products, including the D410, D415, D430, D401 and D450 modules and the D415, D435, D435i, D435f, D405, D455 and D457 depth cameras. We will also continue to support and develop our LibRealSense open-source SDK.

In the future, Intel and the RealSense team will focus our new development on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy.

Building librealsense - Using vcpkg

You can download and install librealsense using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install realsense2

The librealsense port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
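Once the port is installed, a consuming project can locate the library from CMake. The following is a minimal sketch under the usual vcpkg toolchain setup; the project and source-file names are illustrative, and the imported target name follows the package's CMake config (check the port if it differs):

```cmake
cmake_minimum_required(VERSION 3.10)
project(realsense_demo)

# Provided by the vcpkg realsense2 port
find_package(realsense2 CONFIG REQUIRED)

add_executable(realsense_demo main.cpp)
target_link_libraries(realsense_demo PRIVATE realsense2::realsense2)
```

Configure with -DCMAKE_TOOLCHAIN_FILE pointing at vcpkg's scripts/buildsystems/vcpkg.cmake so find_package can see the installed port.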

Download and Install

  • Download - The latest releases including the Intel RealSense SDK, Viewer and Depth Quality tools are available at: latest releases. Please check the release notes for the supported platforms, new features and capabilities, known issues, how to upgrade the Firmware and more.

  • Install - You can also install the SDK or build it from source (on Linux / Windows / macOS / Android / Docker), connect your D400 depth camera, and you are ready to start writing your first application.

Support & Issues: If you need product support (e.g. you have a question about, or a problem with, the device), please check the FAQ & Troubleshooting section. If it is not covered there, please search our closed GitHub issues, Community and Support sites. If you still cannot find an answer to your question, please open a new issue.

What’s included in the SDK:

  • Intel® RealSense™ Viewer: With this application, you can quickly access your Intel® RealSense™ Depth Camera to view the depth stream, visualize point clouds, record and play back streams, configure your camera settings, modify advanced controls, enable depth visualization and post-processing and much more. Download: Intel.RealSense.Viewer.exe
  • Depth Quality Tool: This application allows you to test the camera’s depth quality, including: standard deviation from plane fit, normalized RMS (the subpixel accuracy), distance accuracy and fill rate. You should be able to easily get and interpret several of the depth quality metrics and record and save the data for offline analysis. Download: Depth.Quality.Tool.exe
  • Debug Tools: Device enumeration, FW logger, etc., as can be seen in the tools directory. Included in Intel.RealSense.SDK.exe
  • Code Samples: These simple examples demonstrate how to easily use the SDK to include code snippets that access the camera in your applications. Check some of the C++ examples, including capture, pointcloud and more, and the basic C examples. Included in Intel.RealSense.SDK.exe
  • Wrappers: Python, C#/.NET APIs, as well as integration with the following 3rd-party technologies: ROS1, ROS2, LabVIEW, OpenCV, PCL, Unity, Matlab, OpenNI, UnrealEngine4 and more to come.

Ready to Hack!

Our library offers a high level API for using Intel RealSense depth cameras (in addition to lower level ones). The following snippet shows how to start streaming frames and extracting the depth value of a pixel:

// Create a Pipeline - this serves as a top-level API for streaming and processing frames
rs2::pipeline p;

// Configure and start the pipeline
p.start();

while (true)
{
    // Block program until frames arrive
    rs2::frameset frames = p.wait_for_frames();

    // Try to get a frame of a depth image
    rs2::depth_frame depth = frames.get_depth_frame();

    // Get the depth frame's dimensions
    int width = depth.get_width();
    int height = depth.get_height();

    // Query the distance from the camera to the object in the center of the image
    float dist_to_center = depth.get_distance(width / 2, height / 2);

    // Print the distance
    std::cout << "The camera is facing an object " << dist_to_center << " meters away \r";
}

For more information on the library, please follow our examples, and read the documentation to learn more.

Contributing

In order to contribute to Intel RealSense SDK, please follow our contribution guidelines.

License

This project is licensed under the Apache License, Version 2.0. Copyright 2018 Intel Corporation