Top Related Projects
A modular high-level library to train embodied AI agents across a variety of tasks and environments.
A toolkit for developing and comparing reinforcement learning algorithms.
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Quick Overview
Habitat-sim is a flexible, high-performance 3D simulator for embodied AI research. It is designed to support training and testing of autonomous agents in realistic 3D environments, with a focus on tasks like visual navigation, object manipulation, and language grounding.
Pros
- High performance and scalability, allowing for efficient large-scale training of AI agents
- Modular design that supports easy integration with various sensors, environments, and embodiments
- Rich set of features including physics simulation, rendering, and support for multiple embodiments
- Active development and support from Facebook AI Research (FAIR)
Cons
- Steep learning curve for beginners due to its complexity and extensive feature set
- Limited documentation for some advanced features and customizations
- Requires significant computational resources for optimal performance
- Primarily focused on research applications, which may limit its use in production environments
Code Examples
- Loading a scene and agent:
import habitat_sim
# Create configuration
cfg = habitat_sim.SimulatorConfiguration()
cfg.scene_id = "path/to/scene.glb"
# Create agent configuration
agent_cfg = habitat_sim.agent.AgentConfiguration()
# Create simulator instance
sim = habitat_sim.Simulator(habitat_sim.Configuration(cfg, [agent_cfg]))
- Performing a simple navigation action:
import numpy as np
# Get the agent's state
agent = sim.get_agent(0)
state = agent.get_state()
# Move the agent forward (sim.step takes the action name as a string)
observations = sim.step("move_forward")
# Get the new position
new_position = agent.get_state().position
print(f"New position: {new_position}")
- Rendering an observation:
import cv2
# Get the RGB sensor observation
rgb_obs = observations["rgb"]
# Convert to BGR for OpenCV (drop the alpha channel if the sensor returns RGBA)
bgr_obs = cv2.cvtColor(rgb_obs[..., :3], cv2.COLOR_RGB2BGR)
# Display the observation
cv2.imshow("RGB Observation", bgr_obs)
cv2.waitKey(0)
cv2.destroyAllWindows()
Getting Started
- Install habitat-sim:
conda create -n habitat python=3.9 cmake=3.14.0
conda activate habitat
conda install habitat-sim withbullet -c conda-forge -c aihabitat
- Clone the repository:
git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim
- Run a simple example:
import habitat_sim
import numpy as np
cfg = habitat_sim.SimulatorConfiguration()
cfg.scene_id = "path/to/scene.glb"
agent_cfg = habitat_sim.agent.AgentConfiguration()
sim = habitat_sim.Simulator(habitat_sim.Configuration(cfg, [agent_cfg]))
random_state = np.random.RandomState(2)
for _ in range(100):
    action = random_state.choice(["move_forward", "turn_left", "turn_right"])
    observations = sim.step(action)
    print(f"Action: {action}, Sensor output shape: {observations['rgb'].shape}")
Competitor Comparisons
A modular high-level library to train embodied AI agents across a variety of tasks and environments.
Pros of Habitat-Lab
- Higher-level API for AI agent training and evaluation
- Includes task-specific environments and benchmarks
- Supports multi-agent scenarios and hierarchical planning
Cons of Habitat-Lab
- Depends on Habitat-Sim for core functionality
- May have a steeper learning curve for beginners
- Limited to environments supported by Habitat-Sim
Code Comparison
Habitat-Sim (low-level physics and rendering):
import habitat_sim
cfg = habitat_sim.SimulatorConfiguration()
sim = habitat_sim.Simulator(cfg)
agent = sim.initialize_agent(agent_id=0)
sim.step("move_forward")
Habitat-Lab (high-level task and agent management):
import habitat
env = habitat.Env(config=habitat.get_config("tasks/pointnav.yaml"))
obs = env.reset()
action = agent.act(obs)
obs, reward, done, info = env.step(action)
A toolkit for developing and comparing reinforcement learning algorithms.
Pros of Gym
- Broader range of environments, including classic control, robotics, and Atari games
- Simpler API and easier to get started for beginners
- Larger community and more extensive documentation
Cons of Gym
- Less realistic 3D environments compared to Habitat-sim
- Limited support for complex, multi-agent scenarios
- Fewer options for customizing environment physics and rendering
Code Comparison
Gym:
import gym
env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
Habitat-sim:
import habitat_sim
cfg = habitat_sim.SimulatorConfiguration()
cfg.scene_id = "path/to/scene.glb"
sim = habitat_sim.Simulator(cfg)
agent = sim.initialize_agent(agent_id=0)
observations = sim.step("move_forward")
The Gym example shows a simple environment setup and interaction loop, while the Habitat-sim code demonstrates scene loading and agent initialization. Habitat-sim offers more detailed control over 3D environments, while Gym provides a more straightforward interface for various types of reinforcement learning tasks.
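To make that API difference concrete, here is a minimal sketch, with all names hypothetical and no gym or habitat-sim dependency, of wrapping a habitat-sim-style "step by action name" simulator behind the Gym reset/step convention:

```python
# Hypothetical sketch: wrapping a habitat-sim-style simulator behind a
# Gym-style reset/step interface. ToySim stands in for habitat_sim.Simulator;
# nothing here is a real habitat-sim or Gym API.
import random

class ToySim:
    """Minimal stand-in for a simulator driven by named discrete actions."""
    def __init__(self):
        self.position = 0

    def step(self, action_name):
        # habitat-sim-style step: takes an action *name*, returns observations.
        if action_name == "move_forward":
            self.position += 1
        return {"rgb": [self.position] * 3}

class GymStyleWrapper:
    """Exposes the Gym convention: obs = reset(); obs, reward, done, info = step(a)."""
    action_names = ["move_forward", "turn_left", "turn_right"]

    def __init__(self, sim):
        self.sim = sim
        self.steps = 0

    def reset(self):
        self.sim.position = 0
        self.steps = 0
        return self.sim.step("turn_left")  # no-op step to get an initial observation

    def step(self, action_index):
        obs = self.sim.step(self.action_names[action_index])
        self.steps += 1
        done = self.steps >= 10            # toy episode horizon
        reward = float(self.sim.position)  # toy reward: distance travelled
        return obs, reward, done, {}

env = GymStyleWrapper(ToySim())
obs = env.reset()
rng = random.Random(0)
done = False
while not done:
    obs, reward, done, info = env.step(rng.randrange(3))
print(f"episode finished after {env.steps} steps")
```

The wrapper pattern is exactly what Habitat-Lab's Gym-compatible environments do on top of Habitat-Sim, so the two interface styles are complementary rather than competing.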
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
Pros of ml-agents
- Integrated with Unity game engine, providing a rich visual environment and physics simulation
- Supports a wide range of learning algorithms, including PPO, SAC, and imitation learning
- Offers a user-friendly interface for setting up and training agents within Unity
Cons of ml-agents
- Limited to Unity environment, potentially less flexible for custom simulations
- May have higher computational requirements due to the full game engine backend
- Learning curve for users unfamiliar with Unity development
Code Comparison
ml-agents:
from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityEnvironment")
env.reset()
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)
habitat-sim:
import habitat_sim
cfg = habitat_sim.SimulatorConfiguration()
cfg.scene_id = "path/to/scene.glb"
sim = habitat_sim.Simulator(cfg)
agent = sim.initialize_agent(agent_id=0)
Both repositories provide powerful tools for reinforcement learning in simulated environments. ml-agents excels in visual fidelity and ease of use within the Unity ecosystem, while habitat-sim offers more flexibility for custom simulations and potentially lighter computational requirements. The choice between them depends on specific project needs and familiarity with the respective platforms.
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
Pros of dm_control
- Focuses on continuous control tasks, ideal for robotics and physics-based simulations
- Integrates seamlessly with DeepMind's machine learning libraries
- Provides a wide range of pre-built environments and tasks
Cons of dm_control
- Limited to physics-based simulations, less suitable for complex 3D environments
- Smaller community and ecosystem compared to Habitat-sim
- Less emphasis on photorealistic rendering and real-world applications
Code Comparison
dm_control:
from dm_control import suite
env = suite.load(domain_name="cartpole", task_name="swingup")
timestep = env.reset()
action = env.action_spec().generate_value()
next_timestep = env.step(action)
Habitat-sim:
import habitat_sim
cfg = habitat_sim.SimulatorConfiguration()
cfg.scene_id = "path/to/scene.glb"
sim = habitat_sim.Simulator(cfg)
agent = sim.initialize_agent(agent_id=0)
observations = sim.step("move_forward")
The code snippets demonstrate the different focus areas of each library. dm_control emphasizes physics-based control tasks, while Habitat-sim is geared towards 3D environment simulation and navigation.
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Pros of Baselines
- Broader focus on reinforcement learning algorithms across various domains
- Extensive documentation and tutorials for getting started
- Large community and widespread adoption in RL research
Cons of Baselines
- Less specialized for 3D environments and embodied AI tasks
- May require more setup and configuration for specific use cases
- Potentially steeper learning curve for beginners in RL
Code Comparison
Habitat-sim (C++):
auto scene = sim->createScene();
auto agent = scene->createAgent(agentConfig);
auto observations = sim->step(action);
Baselines (Python):
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10000)
Key Differences
- Habitat-sim focuses on 3D simulation for embodied AI, while Baselines provides implementations of RL algorithms
- Habitat-sim uses C++ for performance, Baselines is Python-based for accessibility
- Habitat-sim offers more detailed control over 3D environments, Baselines is more general-purpose for various RL tasks
Habitat-Sim
A high-performance physics-enabled 3D simulator with support for:
- 3D scans of indoor/outdoor spaces (with built-in support for HM3D, MatterPort3D, Gibson, Replica, and other datasets)
- CAD models of spaces and piecewise-rigid objects (e.g. ReplicaCAD, YCB, Google Scanned Objects),
- Configurable sensors (RGB-D cameras, egomotion sensing)
- Robots described via URDF (mobile manipulators like Fetch, fixed-base arms like Franka, quadrupeds like AlienGo),
- Rigid-body mechanics (via Bullet).
The design philosophy of Habitat is to prioritize simulation speed over the breadth of simulation capabilities. When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and reaches over 10,000 FPS multi-process on a single GPU. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each "step" involves rendering 1 RGBD observation (128x128 pixels) and rigid-body dynamics for 1/30sec.
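Those figures come from the project's own benchmarks. As a rough sketch of how a steps-per-second number is measured (stub workload only; stub_step is a hypothetical stand-in, not a habitat-sim call):

```python
# Illustrative only: timing harness for a steps-per-second figure.
# stub_step is a hypothetical placeholder; swap in a real sim.step()
# call to benchmark an actual habitat-sim scene.
import time

def stub_step():
    # Pretend to render one 128x128 RGBD observation + 1/30 s of physics.
    return {"rgbd_shape": (128, 128, 4)}

def measure_sps(step_fn, n_steps=10_000):
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_steps / elapsed

sps = measure_sps(stub_step)
print(f"{sps:,.0f} steps/sec (stub workload)")
```

Reported SPS depends heavily on sensor resolution, process count, and GPU, so benchmark with your own configuration before comparing against the headline numbers.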
Habitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), training agents (via imitation or reinforcement learning, or no learning at all as in classical SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.
Questions or Comments? Join the AI Habitat community discussions forum.
https://user-images.githubusercontent.com/2941091/126080914-36dc8045-01d4-4a68-8c2e-74d0bca1b9b8.mp4
Citing Habitat
If you use the Habitat platform in your research, please cite the Habitat 1.0, Habitat 2.0, and Habitat 3.0 papers:
@misc{puig2023habitat3,
title = {Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots},
author = {Xavi Puig and Eric Undersander and Andrew Szot and Mikael Dallaire Cote and Ruslan Partsey and Jimmy Yang and Ruta Desai and Alexander William Clegg and Michal Hlavac and Tiffany Min and Theo Gervet and Vladimír Vondruš and Vincent-Pierre Berges and John Turner and Oleksandr Maksymets and Zsolt Kira and Mrinal Kalakrishnan and Jitendra Malik and Devendra Singh Chaplot and Unnat Jain and Dhruv Batra and Akshara Rai and Roozbeh Mottaghi},
year={2023},
archivePrefix={arXiv},
}
@inproceedings{szot2021habitat,
title = {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
author = {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2021}
}
@inproceedings{habitat19iccv,
title = {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
author = {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year = {2019}
}
Habitat-Sim also builds on work contributed by others. If you use contributed methods/models, please cite their works. See the External Contributions section for a list of what was externally contributed and the corresponding work/citation.
Installation
Habitat-Sim can be installed in 3 ways:
- Via Conda - Recommended method for most users. Stable release and nightly builds.
- [Experimental] Via PIP - run pip install . to compile the latest headless build with Bullet. Read build instructions and common build issues.
- Via Docker - Updated approximately once per year for the Habitat Challenge. Read habitat-docker-setup.
- Via Source - For active development. Read build instructions and common build issues.
[Recommended] Conda Packages
Habitat is under active development, and we advise users to restrict themselves to stable releases. Starting with v0.1.4, we provide conda packages for each release.
- Preparing conda env
  Assuming you have conda installed, let's prepare a conda env:
  # We require python>=3.9 and cmake>=3.10
  conda create -n habitat python=3.9 cmake=3.14.0
  conda activate habitat
- conda install habitat-sim
  Pick one of the options below depending on your system/needs:
  - To install on machines with an attached display:
    conda install habitat-sim -c conda-forge -c aihabitat
  - To install on headless machines (i.e. without an attached display, e.g. in a cluster) and machines with multiple GPUs (this parameter relies on EGL and thus does not work on MacOS):
    conda install habitat-sim headless -c conda-forge -c aihabitat
  - [Most common scenario] To install habitat-sim with bullet physics:
    conda install habitat-sim withbullet -c conda-forge -c aihabitat
  - Note: Build parameters can be chained together. For instance, to install habitat-sim with physics on headless machines:
    conda install habitat-sim withbullet headless -c conda-forge -c aihabitat
- Conda packages for older versions can be installed by explicitly specifying the version, e.g. conda install habitat-sim=0.1.6 -c conda-forge -c aihabitat.
- We also provide a nightly conda build for the main branch. However, this should only be used if you need a specific feature not yet in the latest release version. To get the nightly build of the latest main, simply swap -c aihabitat for -c aihabitat-nightly.
Testing
- Let's download some 3D assets using our python data download utility:
  - Download (testing) 3D scenes:
    python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path /path/to/data/
    Note that these testing scenes do not provide semantic annotations. If you would like to test the semantic sensors via example.py, please use the data from the Matterport3D dataset (see Datasets).
  - Download example objects:
    python -m habitat_sim.utils.datasets_download --uids habitat_example_objects --data-path /path/to/data/
Interactive testing: Use the interactive viewer included with Habitat-Sim in either C++ or python:
  # C++ (./build/viewer if compiling locally)
  habitat-viewer /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb
  # Python
  # NOTE: depending on your choice of installation, you may need to add '/path/to/habitat-sim' to your PYTHONPATH.
  # e.g. from 'habitat-sim/' directory run 'export PYTHONPATH=$(pwd)'
  python examples/viewer.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb
You should be able to control an agent in this test scene. Use W/A/S/D keys to move forward/left/backward/right and arrow keys or mouse (LEFT click) to control gaze direction (look up/down/left/right). Try to find the picture of a woman surrounded by a wreath. Have fun!
- Physical interactions: Habitat-sim provides rigid and articulated dynamics simulation via integration with Bullet physics. Try it out now with our interactive viewer functionality in C++ or python.
  First, download our fully interactive ReplicaCAD apartment dataset (140 MB):
  # NOTE: by default, data will be downloaded into habitat-sim/data/.
  # Optionally modify the data path by adding: --data-path /path/to/data/
  # with conda install
  python -m habitat_sim.utils.datasets_download --uids replica_cad_dataset
  # with source (from inside habitat_sim/)
  python src_python/habitat_sim/utils/datasets_download.py --uids replica_cad_dataset
  - Alternatively, 105 scene variations with pre-baked lighting are available via --uids replica_cad_baked_lighting (480 MB).
Then load a ReplicaCAD scene in the viewer application with physics enabled. If you modified the data path above, also modify it in viewer calls below.
  # C++ (./build/viewer if compiling locally)
  habitat-viewer --enable-physics --dataset data/replica_cad/replicaCAD.scene_dataset_config.json -- apt_1
  # Python
  # NOTE: habitat-sim/ directory must be on your PYTHONPATH
  python examples/viewer.py --dataset data/replica_cad/replicaCAD.scene_dataset_config.json --scene apt_1
  - Using scenes with pre-baked lighting instead? Use --dataset data/replica_cad_baked_lighting/replicaCAD_baked.scene_dataset_config.json --scene Baked_sc1_staging_00
The viewer application outputs the full list of keyboard and mouse interface options to the console at runtime.
  Quickstart Example:
  - WASD to move
  - LEFT click and drag the mouse to look around
  - press SPACE to toggle simulation off/on (default on)
  - press 'm' to switch to "GRAB" mouse mode
  - now LEFT or RIGHT click and drag to move objects or open doors/drawers and release to drop the object
  - with an object gripped, scroll the mouse wheel to:
    - (default): move it closer or farther away
    - (+ALT): rotate object fixed constraint frame (yaw)
    - (+CTRL): rotate object fixed constraint frame (pitch)
    - (+ALT+CTRL): rotate object fixed constraint frame (roll)
- Non-interactive testing (e.g. for headless systems): Run the example script:
  python /path/to/habitat-sim/examples/example.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb
  The agent will traverse a particular path and you should see the performance stats at the very end, something like this:
  640 x 480, total time: 3.208 sec. FPS: 311.7
  To reproduce the benchmark table from Habitat ICCV'19, run examples/benchmark.py --scene /path/to/mp3d_example/17DRP5sb8fy/17DRP5sb8fy.glb.
  Additional arguments to example.py are provided to change the sensor configuration, print statistics of the semantic annotations in a scene, compute action-space shortest path trajectories, and set other useful functionality. Refer to the example.py and demo_runner.py source files for an overview.
  To load a specific MP3D or Gibson house: examples/example.py --scene path/to/mp3d/house_id.glb.
  We have also provided an example demo for reference.
To run a physics example in python (after building with "Physics simulation via Bullet"):
python examples/example.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb --enable_physics
  Note that in this mode the agent will be frozen and oriented toward the spawned physical objects. Additionally, --save_png can be used to output agent visual observation frames of the physical scene to the current directory.
Common testing issues
- If you are running on a remote machine and experience display errors when initializing the simulator, e.g.
  X11: The DISPLAY environment variable is missing
  Could not initialize GLFW
  ensure you do not have DISPLAY defined in your environment (run unset DISPLAY to undefine the variable).
- If you see libGL errors, chances are your libGL is located at a non-standard location. See e.g. this issue.
Documentation
Browse the online Habitat-Sim documentation.
Check out our ECCV tutorial series for a hands-on quickstart experience.
Can't find the answer to your question? Try asking the developers and community on our Discussions forum.
Datasets
HowTo use common supported datasets with Habitat-Sim.
External Contributions
- If you use the noise model from PyRobot, please cite their technical report. Specifically, the noise model used for the noisy control functions named pyrobot_* is defined in src_python/habitat_sim/agent/controls/pyrobot_noisy_controls.py.
- If you use the Redwood Depth Noise Model, please cite their paper. Specifically, the noise model is defined in src_python/habitat_sim/sensors/noise_models/redwood_depth_noise_model.py and src/esp/sensor/RedwoodNoiseModel.*.
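As a loose, hypothetical illustration of what a depth-noise model of this kind does (plain distance-proportional Gaussian noise, not the actual Redwood model, which has its own calibrated parametric form):

```python
# Illustration only: post-processing a clean depth map with noise that
# grows with distance, the general shape of sensor noise models like
# Redwood. Function name and parameters are hypothetical.
import random

def noisy_depth(depth_row, rel_sigma=0.01, seed=0):
    """Perturb each depth reading with zero-mean noise proportional to depth."""
    rng = random.Random(seed)
    return [d + rng.gauss(0.0, rel_sigma * d) for d in depth_row]

clean = [0.5, 1.0, 2.0, 4.0]  # depth readings in meters
noisy = noisy_depth(clean)
print([round(d, 3) for d in noisy])
```

For real experiments, use the shipped implementation (attached via the sensor configuration) rather than a hand-rolled model, so results are comparable across papers.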
License
Habitat-Sim is MIT licensed. See the LICENSE for details.
The WebGL demo and demo scripts use:
- The King's Hall by Skokloster Castle (Skoklosters slott) licensed under Creative Commons Attribution
- Van Gogh Room by ruslans3d licensed under Creative Commons Attribution