Top Related Projects
VINS-Fusion - An optimization-based multi-sensor state estimator
ORB_SLAM2 - Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
ORB-SLAM3 - An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
LIO-SAM - Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
VINS-Mono - A Robust and Versatile Monocular Visual-Inertial State Estimator
Quick Overview
OpenVINS is an open-source visual-inertial navigation system: a modular and extensible framework for visual-inertial odometry and SLAM. It provides a flexible and efficient implementation of state-of-the-art algorithms for sensor fusion, state estimation, and mapping, making it suitable for a wide range of applications, including mobile robotics, augmented reality, and autonomous vehicles.
Pros
- Modular and Extensible: The framework is designed to be highly modular, allowing users to easily integrate new sensors, algorithms, and processing pipelines.
- State-of-the-Art Algorithms: Open-VINS implements cutting-edge algorithms for visual-inertial odometry and SLAM, ensuring accurate and robust performance.
- Cross-Platform Compatibility: The project is designed to be cross-platform, supporting various operating systems and hardware platforms.
- Active Development and Community: The project has an active development team and a growing community of contributors, ensuring ongoing improvements and support.
Cons
- Steep Learning Curve: The framework's flexibility and complexity may present a steep learning curve for new users, especially those unfamiliar with visual-inertial navigation and SLAM.
- Computational Overhead: The advanced algorithms used in Open-VINS may require significant computational resources, which could be a limitation for some applications.
- Limited Documentation: While the project has good documentation, some users may find it lacking in certain areas, particularly for advanced use cases or customization.
- Dependency on External Libraries: Open-VINS relies on several external libraries, which may introduce additional complexity and potential compatibility issues.
Code Examples
Note: the snippets in this section are simplified pseudocode sketching the typical lifecycle of a VIO system. The names (vio_system, vio_params, and so on) are illustrative and are not the actual OpenVINS API; the real entry point is the ov_msckf::VioManager class, sketched at the end of this section.
// Initializing the VIO system
vio_system.initialize(params);
// Processing a new sensor measurement
vio_system.process_measurement(timestamp, imu_data, camera_data);
// Retrieving the current state estimate
Eigen::Vector3d position;
Eigen::Quaterniond orientation;
vio_system.get_state(position, orientation);
This pseudocode walks the basic lifecycle: initializing the VIO system, processing sensor measurements, and retrieving the current state estimate.
// Configuring the sensor parameters
vio_params.imu_rate = 200.0;
vio_params.num_cameras = 2;
vio_params.camera_params[0].fx = 500.0;
vio_params.camera_params[0].fy = 500.0;
vio_params.camera_params[0].cx = 320.0;
vio_params.camera_params[0].cy = 240.0;
This snippet shows how sensor parameters such as the IMU rate and camera intrinsics might be configured; in OpenVINS these values are supplied through a YAML configuration file.
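For context, fx, fy, cx, and cy above are the standard pinhole intrinsics: a 3D point (X, Y, Z) expressed in the camera frame projects to the pixel
\[
u = f_x \frac{X}{Z} + c_x, \qquad v = f_y \frac{Y}{Z} + c_y,
\]
so the values in the snippet describe a focal length of 500 pixels with the principal point at the center of a 640x480 image.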
// Registering a new camera
vio_system.add_camera(camera_params);
// Registering a new IMU
vio_system.add_imu(imu_params);
These pseudocode examples show how new sensors (cameras and IMUs) could be registered with a generic VIO pipeline; in OpenVINS itself, sensor setup is driven by the YAML/options configuration rather than explicit add_* calls.
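For comparison, here is a minimal sketch against the real entry point, ov_msckf::VioManager. The feed_measurement_imu() call and the ov_core::ImuData type appear verbatim in code excerpts later on this page; the include paths and the options struct are assumptions to verify against the current headers.
#include <memory>
#include "core/VioManager.h"     // ov_msckf (path assumed)
#include "utils/sensor_data.h"   // ov_core::ImuData (path assumed)

int main() {
  // Options are normally parsed from a YAML config; defaults are used here.
  ov_msckf::VioManagerOptions params;
  auto sys = std::make_shared<ov_msckf::VioManager>(params);

  // Feed one inertial reading (gyroscope in rad/s, accelerometer in m/s^2).
  ov_core::ImuData imu;
  imu.timestamp = 0.005;
  imu.wm << 0.01, -0.02, 0.00;
  imu.am << 0.00, 0.00, 9.81;
  sys->feed_measurement_imu(imu);
  return 0;
}
In practice the IMU and camera feeds run continuously from sensor callbacks, and the estimator state is queried after each camera update.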
Getting Started
To get started with OpenVINS, follow these steps:
- Clone the repository:
  git clone https://github.com/rpng/open_vins.git
- Install the required dependencies, which include Eigen, Ceres Solver, and OpenCV.
- Build the project using CMake:
  cd open_vins
  mkdir build && cd build
  cmake ..
  make -j4
- Run the example applications to test the system:
  ./run_euroc_example
  ./run_simulation_example
- Explore the documentation and examples to learn how to integrate OpenVINS into your own project, configure the system, and customize the algorithms.
Competitor Comparisons
An optimization-based multi-sensor state estimator
Pros of VINS-Fusion
- More comprehensive sensor fusion, including GPS integration
- Better support for loop closure and relocalization
- More extensive documentation and examples
Cons of VINS-Fusion
- Slightly higher computational complexity
- Less frequent updates and maintenance
- More complex setup and configuration process
Code Comparison
VINS-Fusion initialization (simplified; exact call sites vary by version):
// Create the estimator and load parameters parsed from the config file
Estimator estimator;
estimator.setParameter();
// Measurements are then pushed in through the input callbacks
OpenVINS initialization (simplified; options are passed at construction):
// Construct the manager from a populated options struct
ov_msckf::VioManagerOptions params;
ov_msckf::VioManager vio(params);
// Measurements are then fed via feed_measurement_imu() / feed_measurement_camera()
VINS-Fusion exposes a more involved initialization sequence, while OpenVINS concentrates configuration into a single options struct passed at construction. VINS-Fusion's optimization-based backend makes it a strong choice when loop closure and GPS fusion are required, whereas OpenVINS's type-based state system and streamlined codebase are easier to understand and extend for specific use cases.
Both projects see ongoing development, but OpenVINS tends to have more frequent updates and community contributions. VINS-Fusion has a larger user base and more extensive documentation, which can be beneficial for newcomers to visual-inertial odometry.
Pros of rpg_svo_pro_open
- Supports a wider range of sensor configurations, including stereo and RGB-D cameras
- Provides a more robust and accurate visual-inertial odometry (VIO) system
- Includes advanced features like loop closure and global optimization
Cons of rpg_svo_pro_open
- Requires more complex setup and configuration compared to OpenVINS
- May have higher computational requirements, especially on resource-constrained platforms
- Lacks the extensive documentation and community support of OpenVINS
Code Comparison
OpenVINS (rpng/open_vins):
// Initialization of the VIO system (options are passed at construction)
ov_msckf::VioManager vio(params);
// Feed new measurements (IMU and camera data are fed separately)
vio.feed_measurement_imu(imu_data);
vio.feed_measurement_camera(camera_data);
// Get the current state estimate
std::shared_ptr<ov_msckf::State> state = vio.get_state();
rpg_svo_pro_open (uzh-rpg/rpg_svo_pro_open):
// Create a new VIO frontend
vio_frontend.reset(new VIOFrontend(config));
// Process a new frame
vio_frontend->processFrame(timestamp, img, imu);
// Retrieve the current state estimate
Eigen::Isometry3d T_w_c = vio_frontend->getCurrentPose();
The key differences are the initialization and state retrieval methods, as well as the use of a dedicated VIOFrontend class in rpg_svo_pro_open.
Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
Pros of ORB_SLAM2
- More mature and widely adopted in the SLAM community
- Better performance in feature-rich environments
- Supports loop closure for improved accuracy
Cons of ORB_SLAM2
- Limited sensor fusion capabilities
- Less suitable for high-dynamic environments
- Requires good lighting and texture for optimal performance
Code Comparison
ORB_SLAM2 (feature extraction):
void Frame::ExtractORB(int flag, const cv::Mat &im)
{
(*mpORBextractorLeft)(im,cv::Mat(),mvKeys,mDescriptors);
}
OpenVINS (feature detection, simplified):
void TrackKLT::perform_detection(const std::vector<cv::Mat> &img0pyr, std::vector<cv::Point2f> &pts0)
{
  // Detect new corners in the highest-resolution pyramid level
  cv::goodFeaturesToTrack(img0pyr.at(0), pts0, num_features, quality, min_distance);
}
ORB_SLAM2 extracts and matches ORB descriptors, while OpenVINS detects corners and tracks them frame-to-frame with KLT optical flow (descriptor-based tracking is also available). This difference reflects their distinct approaches to visual odometry and SLAM, as the sketch below illustrates.
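Here is a minimal, self-contained sketch of the detect-then-track KLT pattern using plain OpenCV (illustrative only, not OpenVINS internals; the thresholds are arbitrary):
#include <opencv2/opencv.hpp>
#include <vector>

// Detect corners in the previous frame, then track them into the current one.
void klt_step(const cv::Mat &prev_gray, const cv::Mat &curr_gray,
              std::vector<cv::Point2f> &tracked) {
  std::vector<cv::Point2f> pts_prev, pts_curr;
  // Shi-Tomasi corner detection (max 200 corners, quality 0.01, min spacing 10 px)
  cv::goodFeaturesToTrack(prev_gray, pts_prev, 200, 0.01, 10);
  if (pts_prev.empty())
    return;
  // Pyramidal Lucas-Kanade optical flow from the previous frame to the current one
  std::vector<uchar> status;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, pts_curr, status, err);
  // Keep only the points that tracked successfully
  tracked.clear();
  for (size_t i = 0; i < pts_curr.size(); i++)
    if (status[i])
      tracked.push_back(pts_curr[i]);
}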
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
Pros of ORB_SLAM3
- More versatile, supporting monocular, stereo, and RGB-D cameras
- Better loop closure and relocalization capabilities
- Faster processing speed for real-time applications
Cons of ORB_SLAM3
- Less accurate in high-dynamic environments
- Requires more computational resources
- Visual-inertial support is newer than the visual pipeline, and its initialization can be sensitive in practice
Code Comparison
ORB_SLAM3:
// Feature extraction
ORBextractor* mpORBextractorLeft;
ORBextractor* mpORBextractorRight;
// Main tracking function
void Tracking::Track()
{
// ... (tracking logic)
}
OpenVINS:
// IMU propagation
void Propagator::propagate_and_clone(State *state, double timestamp)
{
  // ... (propagation logic)
}
// Update with visual measurements
void UpdaterHelper::update(State *state, std::vector<Type *> &order_NEW, std::vector<Type *> &order_OLD)
{
  // ... (update logic)
}
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
Pros of LIO-SAM
- Integrates LiDAR, IMU, and GPS data for more robust localization and mapping
- Utilizes factor graph optimization for improved accuracy and loop closure
- Supports real-time performance on various platforms
Cons of LIO-SAM
- May require more computational resources due to LiDAR processing
- Potentially less suitable for environments with limited visual features
- Dependency on LiDAR hardware can increase system cost
Code Comparison
LIO-SAM (C++):
pcl::PointCloud<PointType>::Ptr extractCloud(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg)
{
pcl::PointCloud<PointType>::Ptr cloudIn(new pcl::PointCloud<PointType>);
pcl::fromROSMsg(*laserCloudMsg, *cloudIn);
return cloudIn;
}
OpenVINS (C++):
void feed_measurement_imu(const ov_core::ImuData &message) {
std::lock_guard<std::mutex> lck(mtx);
imu_data.emplace_back(message);
sort(imu_data.begin(), imu_data.end());
}
Both repositories focus on state estimation and mapping, but LIO-SAM emphasizes LiDAR integration, while OpenVINS relies on visual-inertial data. LIO-SAM's snippet demonstrates LiDAR point cloud conversion, whereas the OpenVINS snippet shows thread-safe queuing of IMU data. The choice between these systems depends on the specific application requirements, available sensors, and computational resources.
A Robust and Versatile Monocular Visual-Inertial State Estimator
Pros of VINS-Mono
- More established and widely used in the research community
- Supports loop closure for improved accuracy in long trajectories
- Includes a comprehensive initialization process for robust system startup
Cons of VINS-Mono
- Less actively maintained compared to OpenVINS
- May have higher computational requirements, potentially limiting real-time performance on resource-constrained platforms
- Limited support for multi-sensor fusion beyond visual-inertial data
Code Comparison
VINS-Mono (feature tracking):
void FeatureTracker::readImage(const cv::Mat &_img, double _cur_time)
{
cv::Mat img;
TicToc t_r;
cur_time = _cur_time;
if (EQUALIZE)
{
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
TicToc t_c;
clahe->apply(_img, img);
ROS_DEBUG("CLAHE costs: %fms", t_c.toc());
}
else
img = _img;
OpenVINS (feature tracking):
void TrackKLT::feed_new_camera(const CameraData &message) {
// Save our last image before overwriting it
img_last = img_curr.clone();
img_curr = message.image.clone();
// If we are using a mask, then lets apply it
if (message.mask.rows == message.image.rows && message.mask.cols == message.image.cols) {
cv::bitwise_and(img_curr, img_curr, img_curr, message.mask);
}
README
OpenVINS
Welcome to the OpenVINS project! The OpenVINS project houses some core computer vision code along with a state-of-the-art filter-based visual-inertial estimator. The core filter is an Extended Kalman filter which fuses inertial information with sparse visual feature tracks. These visual feature tracks are fused leveraging the Multi-State Constraint Kalman Filter (MSCKF) sliding window formulation which allows for 3D features to update the state estimate without directly estimating the feature states in the filter. Inspired by graph-based optimization systems, the included filter has modularity allowing for convenient covariance management with a proper type-based state system. Please take a look at the feature list below for full details on what the system supports.
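For reference, the MSCKF trick mentioned above can be written in two lines (standard notation from the MSCKF literature, not code from this repository). Stacking the linearized reprojection residuals of one feature across the sliding window gives
\[
\mathbf{r} \approx \mathbf{H}_x \tilde{\mathbf{x}} + \mathbf{H}_f \tilde{\mathbf{p}}_f + \mathbf{n},
\]
where \(\tilde{\mathbf{x}}\) is the window state error, \(\tilde{\mathbf{p}}_f\) the feature position error, and \(\mathbf{n}\) the measurement noise. Choosing \(\mathbf{N}\) whose columns span the left nullspace of \(\mathbf{H}_f\) (so \(\mathbf{N}^\top \mathbf{H}_f = \mathbf{0}\)) yields
\[
\mathbf{r}' = \mathbf{N}^\top \mathbf{r} \approx (\mathbf{N}^\top \mathbf{H}_x)\,\tilde{\mathbf{x}} + \mathbf{N}^\top \mathbf{n},
\]
an update that involves only the sliding-window states, which is why 3D features can constrain the estimate without ever being added to the filter state.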
- Github project page - https://github.com/rpng/open_vins
- Documentation - https://docs.openvins.com/
- Getting started guide - https://docs.openvins.com/getting-started.html
- Publication reference - https://pgeneva.com/downloads/papers/Geneva2020ICRA.pdf
News / Events
- May 11, 2023 - Inertial intrinsic support released as part of v2.7 along with a few bug fixes and improvements to stereo KLT tracking. Please check out the release page for details.
- April 15, 2023 - Minor update to v2.6.3 to support incremental feature triangulation of active features for downstream applications, faster zero-velocity update, small bug fixes, some example realsense configurations, and cached fast state prediction. Please check out the release page for details.
- April 3, 2023 - We have released a monocular plane-aided VINS, termed ov_plane, which leverages the OpenVINS project. Both now support the released Indoor AR Table dataset.
- July 14, 2022 - Improved feature extraction logic for >100hz tracking, some bug fixes and updated scripts. See v2.6.1 PR#259 and v2.6.2 PR#264.
- March 14, 2022 - Initial dynamic initialization open sourcing, asynchronous subscription to inertial readings and publishing of odometry, support for lower frequency feature tracking. See v2.6 PR#232 for details.
- December 13, 2021 - New YAML configuration system, ROS2 support, Docker images, robust static initialization based on disparity, internal logging system to reduce verbosity, image transport publishers, dynamic number of features support, and other small fixes. See v2.5 PR#209 for details.
- July 19, 2021 - Camera classes, masking support, alignment utility, and other small fixes. See v2.4 PR#117 for details.
- December 1, 2020 - Released improved memory management, active feature pointcloud publishing, limiting number of features in update to bound compute, and other small fixes. See v2.3 PR#117 for details.
- November 18, 2020 - Released groundtruth generation utility package, vicon2gt, to enable creation of groundtruth trajectories in a motion capture room for evaluating VIO methods.
- July 7, 2020 - Released zero velocity update for vehicle applications and direct initialization when standing still. See PR#79 for details.
- May 18, 2020 - Released secondary pose graph example repository ov_secondary based on VINS-Fusion. OpenVINS now publishes marginalized feature track, feature 3d position, and first camera intrinsics and extrinsics. See PR#66 for details and discussion.
- April 3, 2020 - Released v2.0 update to the codebase with some key refactoring, ros-free building, improved dataset support, and single inverse depth feature representation. Please check out the release page for details.
- January 21, 2020 - Our paper has been accepted for presentation in ICRA 2020. We look forward to seeing everybody there! We have also added links to a few videos of the system running on different datasets.
- October 23, 2019 - OpenVINS placed first in the IROS 2019 FPV Drone Racing VIO Competition. We will be giving a short presentation at the workshop at 12:45pm in Macau on November 8th.
- October 1, 2019 - We will be presenting at the Visual-Inertial Navigation: Challenges and Applications workshop at IROS 2019. The submitted workshop paper can be found at this link.
- August 21, 2019 - Open sourced ov_maplab for interfacing OpenVINS with the maplab library.
- August 15, 2019 - Initial release of OpenVINS repository and documentation website!
Project Features
- Sliding window visual-inertial MSCKF
- Modular covariance type system
- Comprehensive documentation and derivations
- Extendable visual-inertial simulator
- On manifold SE(3) b-spline
- Arbitrary number of cameras
- Arbitrary sensor rate
- Automatic feature generation
- Six different feature representations (a common parameterization is sketched after this feature list)
- Global XYZ
- Global inverse depth
- Anchored XYZ
- Anchored inverse depth
- Anchored MSCKF inverse depth
- Anchored single inverse depth
- Calibration of sensor intrinsics and extrinsics
- Camera to IMU transform
- Camera to IMU time offset
- Camera intrinsics
- Inertial intrinsics (including g-sensitivity)
- Environmental SLAM feature
- OpenCV ARUCO tag SLAM features
- Sparse feature SLAM features
- Visual tracking support
- Monocular camera
- Stereo camera (synchronized)
- Binocular cameras (synchronized)
- KLT or descriptor based
- Masked tracking
- Static and dynamic state initialization
- Zero velocity detection and updates
- Out of the box evaluation on EuRocMav, TUM-VI, UZH-FPV, KAIST Urban and other VIO datasets
- Extensive evaluation suite (ATE, RPE, NEES, RMSE, etc..)
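As an illustration of the feature representations above, a common anchored inverse-depth convention (generic notation; the codebase's exact convention may differ) stores a feature via its normalized image coordinates \((\alpha, \beta)\) in an anchor camera frame \(\{A\}\) and its inverse depth \(\rho\):
\[
{}^{A}\mathbf{p}_f = \frac{1}{\rho}\begin{bmatrix} \alpha \\ \beta \\ 1 \end{bmatrix},
\qquad
{}^{G}\mathbf{p}_f = {}^{G}_{A}\mathbf{R}\,{}^{A}\mathbf{p}_f + {}^{G}\mathbf{p}_A,
\]
where \({}^{G}_{A}\mathbf{R}\) and \({}^{G}\mathbf{p}_A\) are the anchor camera pose. Inverse depth keeps distant and newly initialized features numerically well conditioned, which is one reason the filter offers several interchangeable representations.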
Codebase Extensions
- ov_plane - A real-time monocular visual-inertial odometry (VIO) system which leverages environmental planes. At its core it presents an efficient, robust monocular plane detection algorithm which does not require additional sensing modalities such as a stereo camera, depth camera, or neural network. The plane detection and tracking algorithm enables real-time regularization of point features to environmental planes, which are either maintained in the state vector as long-lived planes or marginalized for efficiency. Planar regularities are applied to both in-state SLAM and out-of-state MSCKF point features, enabling long-term point-to-plane loop closures due to the large spatial volume of planes.
- vicon2gt - This utility was created to generate groundtruth trajectories using a motion capture system (e.g. Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems. Specifically, we calculate the inertial IMU state (full 15 dof) at the camera frequency and generate a groundtruth trajectory similar to those provided by the EurocMav datasets. It fuses inertial and motion capture information and estimates all unknown spatial-temporal calibrations between the two sensors.
- ov_maplab - This codebase contains the interface wrapper for exporting visual-inertial runs from OpenVINS into the ViMap structure taken by maplab. The state estimates and raw images are appended to the ViMap as OpenVINS runs through a dataset. After completion of the dataset, features are re-extracted and triangulated with maplab's feature system. This can be used to merge multi-session maps or to perform a batch optimization after first running the data through OpenVINS. Some examples have been provided along with a helper script to export trajectories into the standard groundtruth format.
- ov_secondary - This is an example secondary thread which provides loop closure in a loosely coupled manner for OpenVINS. This is a modification of the code originally developed by the HKUST aerial robotics group and can be found in their VINS-Fusion repository. Here we stress that this is a loosely coupled method, thus no information is returned to the estimator to improve the underlying OpenVINS odometry. This codebase has been modified in a few key areas including: exposing more loop closure parameters, subscribing to camera intrinsics, simplifying configuration such that only topics need to be supplied, and some tweaks to the loop closure detection to improve frequency.
Demo Videos
Credit / Licensing
This code was written by the Robot Perception and Navigation Group (RPNG) at the University of Delaware. If you have any issues with the code please open an issue on our github page with relevant implementation details and references. For researchers that have leveraged or compared to this work, please cite the following:
@Conference{Geneva2020ICRA,
Title = {{OpenVINS}: A Research Platform for Visual-Inertial Estimation},
Author = {Patrick Geneva and Kevin Eckenhoff and Woosik Lee and Yulin Yang and Guoquan Huang},
Booktitle = {Proc. of the IEEE International Conference on Robotics and Automation},
Year = {2020},
Address = {Paris, France},
Url = {\url{https://github.com/rpng/open_vins}}
}
The codebase and documentation are licensed under the GNU General Public License v3 (GPL-3). You must preserve the copyright and license notices in your derivative work and make available the complete source code with modifications under the same license (see this; this is not legal advice).