MIT-SPARK / Kimera

Top Related Projects

VINS-Fusion: An optimization-based multi-sensor state estimator

ORB_SLAM2: Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM

Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.

The Kalibr visual-inertial calibration toolbox

Quick Overview

Kimera is an open-source C++ library for real-time metric-semantic simultaneous localization and mapping (SLAM). It combines visual-inertial odometry, pose graph optimization, and 3D mesh reconstruction to create a comprehensive SLAM system. Kimera aims to provide accurate localization and dense semantic 3D reconstruction for robotics and augmented reality applications.

Pros

  • Combines multiple SLAM techniques for improved accuracy and robustness
  • Supports real-time performance on consumer-grade hardware
  • Provides both metric (geometric) and semantic (object-level) mapping
  • Actively maintained with regular updates and improvements

Cons

  • Steep learning curve due to the complexity of the system
  • Limited documentation for advanced usage and customization
  • Requires careful parameter tuning for optimal performance
  • Dependency on specific sensor configurations (e.g., stereo cameras and IMU)

Code Examples

  1. Initializing Kimera VIO:
#include <kimera-vio/pipeline/Pipeline.h>
#include <gtsam/geometry/Pose3.h>

// Start the pipeline from the identity body pose and the parameters on disk.
gtsam::Pose3 initial_W_Body = gtsam::Pose3::identity();
VioParams params = VioParams::fromRosFile("path/to/params.yaml");
Pipeline vio_pipeline(initial_W_Body, params);
  2. Processing a new frame:
#include <kimera-vio/frontend/StereoVisionFrontEnd.h>

StereoFrame::UniquePtr stereo_frame = createStereoFrame(); // Create from sensor data
vio_pipeline.spinOnce(std::move(stereo_frame));
  3. Accessing the 3D mesh:
#include <kimera-vio/mesh/Mesher.h>

const Mesh3D& mesh = vio_pipeline.spinOnce().mesh_;
for (const auto& vertex : mesh.vertices_) {
    // Process vertex data
}

Getting Started

  1. Clone the repository:

    git clone https://github.com/MIT-SPARK/Kimera.git
    
  2. Install dependencies:

    sudo apt-get install libgtsam-dev libopencv-dev libyaml-cpp-dev
    
  3. Build Kimera:

    cd Kimera
    mkdir build && cd build
    cmake ..
    make -j4
    
  4. Run the example:

    ./kimera_vio_ros path/to/params.yaml path/to/data/
    

Competitor Comparisons

VINS-Fusion: An optimization-based multi-sensor state estimator

Pros of VINS-Fusion

  • More mature and widely adopted in the robotics community
  • Supports multiple sensor configurations (stereo, mono, stereo+IMU, mono+IMU)
  • Extensive documentation and tutorials available

Cons of VINS-Fusion

  • Less focus on 3D mesh reconstruction compared to Kimera
  • May have higher computational requirements in some scenarios
  • Limited support for loop closure in large-scale environments

Code Comparison

VINS-Fusion (C++):

void System::ProcessIMU(double t, const Vector3d &linear_acceleration, const Vector3d &angular_velocity)
{
    if (!initialized_)
        return;
    estimator.processIMU(t, linear_acceleration, angular_velocity);
}

Kimera (C++):

void VioBackEnd::addImuMeasurement(const ImuMeasurement& imu_measurement) {
  CHECK(imu_measurement.timestamp_ >= timestamp_lkf_)
      << "Imu measurement cannot be older than oldest state in smoother.";
  imu_queue_.push(imu_measurement);
}

Both snippets ingest raw IMU samples, but they handle them differently: VINS-Fusion forwards each measurement straight to its estimator once initialization is done, while Kimera first checks that the measurement is not older than the latest keyframe and then buffers it in a queue for later processing.
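
As a rough illustration of that buffering pattern, the self-contained sketch below rejects IMU samples that predate the latest keyframe and queues the rest. The ImuMeasurement struct and ImuBuffer class are illustrative stand-ins, not Kimera's actual types:

#include <array>
#include <cassert>
#include <cstdint>
#include <queue>

// Illustrative stand-in for an IMU sample; not Kimera's actual type.
struct ImuMeasurement {
  int64_t timestamp_ns;                        // monotonically increasing sensor clock
  std::array<double, 3> linear_acceleration;   // m/s^2
  std::array<double, 3> angular_velocity;      // rad/s
};

// Reject-then-enqueue buffer: samples older than the newest state already
// in the estimator are dropped instead of being handed to the smoother.
class ImuBuffer {
 public:
  explicit ImuBuffer(int64_t last_keyframe_timestamp_ns)
      : last_keyframe_timestamp_ns_(last_keyframe_timestamp_ns) {}

  // Returns false (and discards the sample) if it is too old.
  bool add(const ImuMeasurement& imu) {
    if (imu.timestamp_ns < last_keyframe_timestamp_ns_) return false;
    queue_.push(imu);
    return true;
  }

  bool empty() const { return queue_.empty(); }

  // Pops the oldest buffered sample.
  ImuMeasurement pop() {
    assert(!queue_.empty());
    ImuMeasurement front = queue_.front();
    queue_.pop();
    return front;
  }

 private:
  int64_t last_keyframe_timestamp_ns_;
  std::queue<ImuMeasurement> queue_;
};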

ORB_SLAM2: Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities

Pros of ORB_SLAM2

  • Lightweight and efficient, suitable for real-time applications
  • Well-established and widely used in the robotics community
  • Supports monocular, stereo, and RGB-D cameras

Cons of ORB_SLAM2

  • Limited to visual SLAM, lacking multi-sensor fusion capabilities
  • Less robust in dynamic environments or scenes with limited features
  • Produces only a sparse point-cloud map, with no dense mesh or semantic reconstruction

Code Comparison

ORB_SLAM2 (feature extraction):

void Frame::ExtractORB(int flag, const cv::Mat &im)
{
    (*mpORBextractorLeft)(im,cv::Mat(),mvKeys,mDescriptors);
}

Kimera (feature extraction):

void Frame::extractFeatures(const cv::Mat& img) {
  feature_detector_->detectAndCompute(
      img, cv::Mat(), keypoints_, descriptors_);
}

Both systems use similar approaches for feature extraction, but Kimera's implementation is more modular and allows for easier integration of different feature detectors.
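
That modularity can be illustrated with plain OpenCV rather than Kimera's actual code: by accepting any cv::Feature2D implementation, the extraction routine below can switch detectors at the call site without being rewritten.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

#include <vector>

// Generic sketch: any cv::Feature2D implementation can be plugged in.
void extractFeatures(const cv::Ptr<cv::Feature2D>& detector,
                     const cv::Mat& img,
                     std::vector<cv::KeyPoint>* keypoints,
                     cv::Mat* descriptors) {
  detector->detectAndCompute(img, cv::noArray(), *keypoints, *descriptors);
}

int main() {
  cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC1);  // placeholder image
  std::vector<cv::KeyPoint> keypoints;
  cv::Mat descriptors;

  // Swapping detectors is a one-line change; extractFeatures() is untouched.
  extractFeatures(cv::ORB::create(), img, &keypoints, &descriptors);
  extractFeatures(cv::BRISK::create(), img, &keypoints, &descriptors);
  return 0;
}

Hard-coding a specific extractor, as in the ORB_SLAM2 snippet, avoids this indirection but ties the frame class to one feature type.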

ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM

Pros of ORB_SLAM3

  • More mature and widely adopted in the SLAM community
  • Supports monocular, stereo, and RGB-D cameras
  • Includes loop closing and relocalization capabilities

Cons of ORB_SLAM3

  • Sensor support is limited to cameras and IMUs, with no fusion of other modalities such as lidar
  • Less focus on semantic understanding of the environment
  • Requires careful parameter tuning for optimal performance

Code Comparison

ORB_SLAM3:

// Feature extraction and matching
void Frame::ExtractORB(int flag, const cv::Mat &im)
{
    (*mpORBextractorLeft)(im, cv::Mat(), mvKeys, mDescriptors);
}

Kimera:

// VIO pipeline
void VioBackEnd::spinOnce(const FrontendOutput::Ptr& frontend_output) {
  // Process visual-inertial data
  processVioBackEnd(frontend_output);
}

The snippets illustrate different layers of each system: the ORB_SLAM3 excerpt shows its ORB feature extraction step, while the Kimera excerpt shows the entry point of its visual-inertial backend. ORB_SLAM3's code is specialized for feature-based SLAM, whereas Kimera's pipeline is organized into separate frontend, backend, and meshing stages that fuse visual and inertial data.
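
As a minimal sketch of that staged handoff (the FrontendOutput struct and Backend class below are illustrative assumptions, not Kimera's actual interfaces), a frontend packages tracked features and preintegrated IMU data into one output object that the backend consumes per call:

#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical frontend output: one packet of visual + inertial data.
struct FrontendOutput {
  int64_t timestamp_ns;
  std::vector<double> tracked_features;    // flattened (u, v) pixel coordinates
  std::vector<double> imu_preintegration;  // placeholder summary of IMU motion
};

class Backend {
 public:
  // Fuses one frontend packet into the running state estimate.
  void spinOnce(const std::shared_ptr<const FrontendOutput>& output) {
    if (!output) return;
    latest_timestamp_ns_ = output->timestamp_ns;
    // ... a real backend would run a fixed-lag smoother or bundle adjustment here ...
  }

  int64_t latest_timestamp_ns() const { return latest_timestamp_ns_; }

 private:
  int64_t latest_timestamp_ns_ = 0;
};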

Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.

Pros of Cartographer

  • More mature and widely adopted in industry
  • Supports a broader range of sensor inputs
  • Better documentation and community support

Cons of Cartographer

  • Higher computational requirements
  • Less focus on visual-inertial odometry
  • More complex setup and configuration

Code Comparison

Kimera (C++):

VioBackEnd::VioBackEnd(const BackendParams& params)
    : backend_params_(params),
      debug_info_(nullptr),
      vio_update_finished_(false) {
  initializeBackend();
}

Cartographer (C++):

MapBuilder::MapBuilder(const proto::MapBuilderOptions& options)
    : options_(options), thread_pool_(options.num_background_threads()) {
  sensor_collator_ = common::make_unique<sensor::Collator>();
  sensor_collator_->AddTrajectory(
      0, std::set<std::string>(options.expected_range_sensor_ids().begin(),
                               options.expected_range_sensor_ids().end()));
}

Both projects use C++ and follow object-oriented programming principles. Kimera's code appears more focused on visual-inertial odometry, while Cartographer's code shows its multi-sensor fusion approach and trajectory handling.
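
As a generic illustration of that sensor-collation idea (not Cartographer's actual sensor::Collator API), the sketch below merges samples from several sensors into a single timestamp-ordered stream using a priority queue:

#include <cstdint>
#include <queue>
#include <string>
#include <vector>

// Illustrative sensor sample; the fields are placeholders.
struct SensorSample {
  int64_t timestamp_ns;
  std::string sensor_id;     // e.g. "imu0", "cam0", "lidar0"
  std::vector<double> data;  // raw measurement payload
};

// Orders the heap so that the sample with the smallest timestamp is on top.
struct LaterTimestamp {
  bool operator()(const SensorSample& a, const SensorSample& b) const {
    return a.timestamp_ns > b.timestamp_ns;
  }
};

class SensorCollator {
 public:
  void add(SensorSample sample) { heap_.push(std::move(sample)); }

  // Pops the oldest queued sample across all sensors; returns false when empty.
  bool next(SensorSample* out) {
    if (heap_.empty()) return false;
    *out = heap_.top();
    heap_.pop();
    return true;
  }

 private:
  std::priority_queue<SensorSample, std::vector<SensorSample>, LaterTimestamp> heap_;
};

Ordering everything by timestamp before processing is what lets a collation-based design accept arbitrary mixes of lidar, IMU, and odometry inputs.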

The Kalibr visual-inertial calibration toolbox

Pros of Kalibr

  • Specialized tool for sensor calibration, particularly for camera-IMU systems
  • Supports a wide range of sensor configurations and calibration patterns
  • Well-established and widely used in the robotics community

Cons of Kalibr

  • Limited to calibration tasks, not a full SLAM or visual-inertial odometry solution
  • May require more manual setup and parameter tuning compared to Kimera

Code Comparison

Kalibr (Python-based calibration script):

import kalibr_common as kc
import kalibr_imu_camera_calibration as kicl

calibrator = kicl.ImuCameraCalibrator()
calibrator.loadDataset(bag_file)
calibrator.calibrate()
calibrator.printResults()

Kimera (C++ visual-inertial odometry pipeline):

#include <kimera-vio/pipeline/Pipeline.h>

KimeraVIO::Pipeline pipeline(FLAGS_params_folder);
pipeline.spinOnline(FLAGS_dataset_path);
pipeline.shutdown();

The code snippets highlight the different focus areas of the two projects: Kalibr for sensor calibration and Kimera for visual-inertial odometry. Kalibr's code is centered around calibration procedures, while Kimera's code demonstrates its use as a complete VIO pipeline.
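
To show how the two tools fit together, the sketch below loads camera-IMU calibration results from a YAML file with yaml-cpp: the kind of quantities Kalibr estimates and a VIO pipeline such as Kimera consumes. The field and key names (intrinsics, distortion, T_cam_imu) are illustrative assumptions, not Kalibr's actual output schema or Kimera's expected parameter format.

#include <yaml-cpp/yaml.h>

#include <algorithm>
#include <array>
#include <cassert>
#include <string>
#include <vector>

// Illustrative container for typical camera-IMU calibration results.
struct CameraImuCalibration {
  std::vector<double> intrinsics;    // fx, fy, cx, cy
  std::vector<double> distortion;    // lens distortion coefficients
  std::array<double, 16> T_cam_imu;  // 4x4 camera-from-IMU transform, row-major
};

// Reads the (assumed) YAML keys "intrinsics", "distortion" and "T_cam_imu".
CameraImuCalibration loadCalibration(const std::string& path) {
  const YAML::Node node = YAML::LoadFile(path);
  CameraImuCalibration calib;
  calib.intrinsics = node["intrinsics"].as<std::vector<double>>();
  calib.distortion = node["distortion"].as<std::vector<double>>();
  const std::vector<double> flat = node["T_cam_imu"].as<std::vector<double>>();
  assert(flat.size() == calib.T_cam_imu.size());
  std::copy(flat.begin(), flat.end(), calib.T_cam_imu.begin());
  return calib;
}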

README

Kimera

Kimera is a C++ library for real-time metric-semantic simultaneous localization and mapping, which uses camera images and inertial data to build a semantically annotated 3D mesh of the environment. Kimera is modular, ROS-enabled, and runs on a CPU.

Kimera comprises four modules:

  • A fast and accurate Visual Inertial Odometry (VIO) pipeline (Kimera-VIO)
  • A full SLAM implementation based on Robust Pose Graph Optimization (Kimera-RPGO)
  • A per-frame and multi-frame 3D mesh generator (Kimera-Mesher)
  • And a generator of semantically annotated 3D meshes (Kimera-Semantics)

Click on the following links to install Kimera's modules and get started! It is very easy to install!

Kimera-VIO & Kimera-Mesher

Kimera-RPGO

Kimera-Semantics

Chart

[overall_chart figure]

Citation

If you found any of the above modules useful, we would really appreciate it if you could cite our work:

@InProceedings{Rosinol19icra-incremental,
  title = {Incremental visual-inertial 3d mesh generation with structural regularities},
  author = {Rosinol, Antoni and Sattler, Torsten and Pollefeys, Marc and Carlone, Luca},
  year = {2019},
  booktitle = {2019 International Conference on Robotics and Automation (ICRA)},
  pdf = {https://arxiv.org/pdf/1903.01067.pdf}
}
@InProceedings{Rosinol20icra-Kimera,
  title = {Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping},
  author = {Rosinol, Antoni and Abate, Marcus and Chang, Yun and Carlone, Luca},
  year = {2020},
  booktitle = {IEEE Intl. Conf. on Robotics and Automation (ICRA)},
  url = {https://github.com/MIT-SPARK/Kimera},
  pdf = {https://arxiv.org/pdf/1910.02490.pdf}
}
@InProceedings{Rosinol20rss-dynamicSceneGraphs,
  title = {{3D} Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans},
  author = {A. Rosinol and A. Gupta and M. Abate and J. Shi and L. Carlone},
  year = {2020},
  booktitle = {Robotics: Science and Systems (RSS)},
  pdf = {https://arxiv.org/pdf/2002.06289.pdf}
}
@InProceedings{Rosinol21arxiv-Kimera,
  title = {{K}imera: from {SLAM} to Spatial Perception with {3D} Dynamic Scene Graphs},
  author = {A. Rosinol and A. Violette and M. Abate and N. Hughes and Y. Chang and J. Shi and A. Gupta and L. Carlone},
  year = {2021},
  booktitle = {arxiv},
  pdf = {https://arxiv.org/pdf/2101.06894.pdf}
}

Open-Source Datasets

In addition to real-world experiments on the EuRoC dataset, we use a photo-realistic Unity-based simulator to test Kimera. The simulator provides:

  • RGB Stereo camera
  • Depth camera
  • Ground-truth 2D Semantic Segmentation
  • IMU data
  • Ground-Truth Odometry
  • 2D Lidar
  • TF (ground-truth odometry of robots and agents)
  • Static TF (ground-truth poses of static objects)

Using this simulator, we created several large visual-inertial datasets which feature scenes with and without dynamic agents (humans), as well as a large variety of environments (indoors and outdoors, small and large). These are ideal to test your Metric-Semantic SLAM and/or other Spatial-AI systems!

Acknowledgments

Kimera is partially funded by ARL DCIST, ONR RAIDER, MIT Lincoln Laboratory, and the “la Caixa” Foundation (ID 100010434) through fellowship LCF/BQ/AA18/11680088 (A. Rosinol).

License

BSD License