
facebookresearch/habitat-lab

A modular high-level library to train embodied AI agents across a variety of tasks and environments.


Top Related Projects

  • habitat-sim: A flexible, high-performance 3D simulator for Embodied AI research.
  • gym: A toolkit for developing and comparing reinforcement learning algorithms.
  • ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents), an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
  • dm_control: Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
  • baselines: OpenAI Baselines, high-quality implementations of reinforcement learning algorithms.

Quick Overview

Habitat-Lab is an AI research framework for embodied AI tasks, developed by Facebook Research. It provides a flexible, high-performance platform for training and evaluating AI agents in 3D environments, focusing on tasks like navigation, instruction following, and object manipulation.

Pros

  • Highly modular and extensible architecture
  • Supports a wide range of embodied AI tasks and environments
  • Efficient simulation with high frame rates
  • Integrates well with popular deep learning frameworks like PyTorch

Cons

  • Steep learning curve for beginners
  • Limited documentation for advanced features
  • Requires significant computational resources for large-scale experiments
  • Some features are still in experimental stages

Code Examples

  1. Creating a simple navigation agent:
import habitat

env = habitat.Env(
    config=habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
)

observations = env.reset()
while not env.episode_over:
    action = env.action_space.sample()
    observations = env.step(action)  # habitat.Env.step returns observations only
    
env.close()
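Note that habitat.Env.step returns observations only. If you need a gym-style (observations, reward, done, info) tuple, you can subclass habitat.RLEnv, which wraps habitat.Env with a reward interface. A minimal sketch; the reward logic below is a placeholder assumption, not part of Habitat-Lab:

import habitat

class SimpleRLEnv(habitat.RLEnv):
    # Placeholder reward shaping, for illustration only
    def get_reward_range(self):
        return (-1.0, 1.0)

    def get_reward(self, observations):
        return 0.0

    def get_done(self, observations):
        return self.habitat_env.episode_over

    def get_info(self, observations):
        return self.habitat_env.get_metrics()

env = SimpleRLEnv(
    config=habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
)
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()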
  2. Loading a custom scene:
import habitat

config = habitat.get_config("path/to/config.yaml")
config.defrost()
config.DATASET.SCENES_DIR = "path/to/custom/scenes"
config.DATASET.SCENE_DATASETS = ["custom_dataset.json.gz"]
config.freeze()

env = habitat.Env(config=config)
  3. Using the Habitat-Lab simulator with PyTorch:
import torch
import habitat

env = habitat.Env(config=habitat.get_config("path/to/config.yaml"))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

obs = env.reset()
rgb_tensor = torch.from_numpy(obs["rgb"]).to(device)
depth_tensor = torch.from_numpy(obs["depth"]).to(device)

# Process tensors with your PyTorch model
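One hedged way to continue this example is to convert the HWC uint8 RGB tensor into a normalized NCHW batch and pass it through a small encoder. The encoder below is a hypothetical stand-in, not part of Habitat-Lab:

import torch.nn as nn

# Hypothetical encoder: HWC uint8 RGB -> normalized NCHW float batch -> feature vector
rgb = rgb_tensor.float().permute(2, 0, 1).unsqueeze(0) / 255.0
encoder = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
).to(device)
features = encoder(rgb)  # shape: (1, 32)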

Getting Started

  1. Install Habitat-Lab:
conda create -n habitat python=3.9 cmake=3.14.0
conda activate habitat
conda install habitat-sim withbullet -c conda-forge -c aihabitat
git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab
  2. Download test data:
python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes
  3. Run a simple example:
import habitat
env = habitat.Env(config=habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml"))
observations = env.reset()

Competitor Comparisons

A flexible, high-performance 3D simulator for Embodied AI research.

Pros of Habitat-sim

  • Faster simulation speed and better performance for large-scale environments
  • More detailed physics simulation and rendering capabilities
  • Lower-level control over the simulation environment

Cons of Habitat-sim

  • Steeper learning curve due to its lower-level nature
  • Less integrated with high-level AI/ML frameworks
  • Requires more manual setup for complex scenarios

Code Comparison

Habitat-sim (C++):

auto sim = habitat::Simulator(config);
auto agent = sim.getAgent(0);
agent->act(habitat::ActionSpec::create("move_forward"));

Habitat-lab (Python):

env = habitat.Env(config=config)
obs = env.reset()
action = {"action": "MOVE_FORWARD"}
obs = env.step(action)  # habitat.Env.step returns observations only

Habitat-sim provides lower-level control and is implemented in C++, while Habitat-lab offers a higher-level Python interface for easier integration with AI/ML workflows. Habitat-sim is better suited for performance-critical applications, while Habitat-lab is more accessible for rapid prototyping and research.


A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Broader scope, supporting a wide range of environments beyond just robotics and embodied AI
  • Larger community and ecosystem with many third-party environments
  • Simpler API, making it easier for beginners to get started

Cons of Gym

  • Less focused on realistic 3D environments and embodied AI tasks
  • Lacks built-in support for advanced features like physics simulation and photorealistic rendering
  • Not optimized for large-scale distributed training

Code Comparison

Gym:

import gym
env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

Habitat-lab:

import habitat
config = habitat.get_config('benchmark/nav/pointnav/pointnav_gibson.yaml')
env = habitat.Env(config=config)
observations = env.reset()
for _ in range(1000):
    if env.episode_over:
        break
    action = env.action_space.sample()
    observations = env.step(action)  # habitat.Env.step returns observations only

Both repositories provide reinforcement learning environments, but Habitat-lab focuses on embodied AI tasks in realistic 3D environments, while Gym offers a more general-purpose toolkit for various RL problems. Habitat-lab provides more advanced features for 3D navigation and interaction, while Gym's simplicity makes it more accessible for a wider range of applications.


The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • Seamless integration with Unity game engine, allowing for easy development of 3D environments
  • Supports a wide range of learning algorithms, including PPO, SAC, and POCA
  • Extensive documentation and tutorials for beginners and advanced users

Cons of ml-agents

  • Limited to Unity-based environments, less flexible for custom simulations
  • May have a steeper learning curve for those unfamiliar with Unity
  • Performance can be slower compared to specialized simulation frameworks

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])

habitat-lab:

import habitat
from habitat.sims import make_sim

config = habitat.get_config("configs/tasks/pointnav.yaml")
sim = make_sim(config.SIMULATOR.TYPE, config=config.SIMULATOR)

Both repositories provide powerful tools for reinforcement learning in simulated environments. ml-agents excels in Unity-based game development scenarios, while habitat-lab offers more flexibility for custom simulations and research-oriented tasks.

Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.

Pros of dm_control

  • More focused on physics-based control tasks and robotics simulations
  • Offers a wider range of pre-built environments and tasks
  • Tighter integration with DeepMind's reinforcement learning libraries

Cons of dm_control

  • Less emphasis on photorealistic 3D environments
  • Limited support for navigation and embodied AI tasks
  • Smaller community and ecosystem compared to Habitat-Lab

Code Comparison

dm_control:

from dm_control import suite
env = suite.load(domain_name="cartpole", task_name="swingup")
timestep = env.reset()
action = env.action_spec().generate_value()
next_timestep = env.step(action)

Habitat-Lab:

import habitat
env = habitat.Env(config=habitat.get_config("benchmark/nav/pointnav/pointnav_gibson.yaml"))
observations = env.reset()
action = env.action_space.sample()
observations = env.step(action)  # habitat.Env.step returns observations only

Both repositories provide environments for reinforcement learning research, but they focus on different aspects. dm_control specializes in physics-based control tasks, while Habitat-Lab emphasizes 3D navigation and embodied AI in realistic environments. The code examples show how to create and interact with environments in each framework, highlighting their distinct approaches to task definition and environment interaction.


OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

Pros of baselines

  • Broader focus on reinforcement learning algorithms across various environments
  • Well-established and widely used in the RL research community
  • Includes implementations of popular RL algorithms like DQN, PPO, and A2C

Cons of baselines

  • Less focused on embodied AI and 3D environments
  • May require more setup and configuration for specific tasks
  • Documentation could be more comprehensive for some algorithms

Code comparison

habitat-lab:

import habitat
env = habitat.Env(
    config=habitat.get_config("configs/tasks/pointnav.yaml")
)
observations = env.reset()

baselines:

import gym
from baselines import deepq
env = gym.make("CartPole-v0")
model = deepq.learn(env, network='mlp', total_timesteps=100000)

Summary

habitat-lab focuses on embodied AI tasks in 3D environments, while baselines provides a broader set of RL algorithms for various environments. habitat-lab offers more specialized tools for navigation and interaction in realistic 3D spaces, whereas baselines is better suited for general RL research across different domains. The choice between them depends on the specific research goals and the type of environments being studied.


README


Habitat-Lab

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as develop agents that can interact with humans in performing these tasks.

Towards this goal, Habitat-Lab is designed to support the following features:

  • Flexible task definitions: allowing users to train agents in a wide variety of single and multi-agent tasks (e.g. navigation, rearrangement, instruction following, question answering, human following), as well as define novel tasks.
  • Diverse embodied agents: configuring and instantiating a diverse set of embodied agents, including commercial robots and humanoids, specifying their sensors and capabilities.
  • Training and evaluating agents: providing algorithms for single and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics (a short sketch follows below).
  • Human in the loop interaction: providing a framework for humans to interact with the simulator, enabling the collection of embodied data and interaction with trained agents.

Habitat-Lab uses Habitat-Sim as the core simulator; see the Documentation section below for reference material.
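As one concrete illustration of the benchmarking tools mentioned above, a random agent can be scored with habitat.Benchmark. This is a minimal sketch: the config path, episode count, and action names are assumptions that may differ across versions.

import random

import habitat

class RandomAgent(habitat.Agent):
    # Minimal agent: ignores observations and samples a random action name
    def reset(self):
        pass

    def act(self, observations):
        return {"action": random.choice(["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"])}

benchmark = habitat.Benchmark("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
metrics = benchmark.evaluate(RandomAgent(), num_episodes=10)
print(metrics)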

Habitat Demo



Citing Habitat

If you use the Habitat platform in your research, please cite the Habitat 1.0, Habitat 2.0, and Habitat 3.0 papers:

@misc{puig2023habitat3,
  title         = {Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots},
  author        = {Xavi Puig and Eric Undersander and Andrew Szot and Mikael Dallaire Cote and Ruslan Partsey and Jimmy Yang and Ruta Desai and Alexander William Clegg and Michal Hlavac and Tiffany Min and Theo Gervet and Vladimír Vondruš and Vincent-Pierre Berges and John Turner and Oleksandr Maksymets and Zsolt Kira and Mrinal Kalakrishnan and Jitendra Malik and Devendra Singh Chaplot and Unnat Jain and Dhruv Batra and Akshara Rai and Roozbeh Mottaghi},
  year          = {2023},
  archivePrefix = {arXiv},
}

@inproceedings{szot2021habitat,
  title     =     {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author    =     {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle =     {Advances in Neural Information Processing Systems (NeurIPS)},
  year      =     {2021}
}

@inproceedings{habitat19iccv,
  title     =     {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    =     {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle =     {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      =     {2019}
}

Installation

  1. Preparing conda env

    Assuming you have conda installed, let's prepare a conda env:

    # We require python>=3.9 and cmake>=3.14
    conda create -n habitat python=3.9 cmake=3.14.0
    conda activate habitat
    
  2. conda install habitat-sim

    • To install habitat-sim with bullet physics
      conda install habitat-sim withbullet -c conda-forge -c aihabitat
      
      Note, for newer features added after the most recent release, you may need to install aihabitat-nightly. See Habitat-Sim's installation instructions for more details.
  3. Install the stable version of habitat-lab.

    git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
    cd habitat-lab
    pip install -e habitat-lab  # install habitat_lab
    
  4. Install habitat-baselines.

    The command above installs only the core of Habitat-Lab. To include habitat_baselines along with all its additional requirements, use the command below after installing habitat-lab:

    pip install -e habitat-baselines  # install habitat_baselines
    
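As a quick sanity check that the editable install worked (this assumes the habitat conda env is active and that the package exposes __version__, which recent releases do):

import habitat
print(habitat.__version__)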

Testing

  1. Let's download some 3D assets using Habitat-Sim's python data download utility:

    • Download (testing) 3D scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/
      

      Note that these testing scenes do not provide semantic annotations.

    • Download point-goal navigation episodes for the test scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/
      
  2. Non-interactive testing: test the Pick task by running the example script:

    python examples/example.py
    

    which uses habitat-lab/habitat/config/benchmark/rearrange/skills/pick.yaml for the task and agent configuration. The script roughly does the following:

    import gym
    import habitat.gym
    
    # Load embodied AI task (RearrangePick) and a pre-specified virtual robot
    env = gym.make("HabitatRenderPick-v0")
    observations = env.reset()
    
    terminal = False
    
    # Step through environment with random actions
    while not terminal:
        observations, reward, terminal, info = env.step(env.action_space.sample())
    

    To modify some of the configurations of the environment, you can also use the habitat.gym.make_gym_from_config method, which creates a habitat environment from a configuration object.

    config = habitat.get_config(
      "benchmark/rearrange/skills/pick.yaml",
      overrides=["habitat.environment.max_episode_steps=20"]
    )
    env = habitat.gym.make_gym_from_config(config)
    

    If you want to know more about what the different configuration key overrides do, you can use this reference.

    See examples/register_new_sensors_and_measures.py for an example of how to extend habitat-lab from outside the source code; a minimal sketch of this pattern appears after this list.

  3. Interactive testing: use your keyboard and mouse to control a Fetch robot in a ReplicaCAD environment:

    # Pygame for interactive visualization, pybullet for inverse kinematics
    pip install pygame==2.0.1 pybullet==3.0.4
    
    # Interactive play script
    python examples/interactive_play.py --never-end
    

    Use the I/J/K/L keys to move the robot base forward/left/backward/right, W/A/S/D to move the arm end-effector forward/left/backward/right, and E/Q to move the arm up/down. The arm can be difficult to control via end-effector control. More details in documentation. Try to move the base and the arm to touch the red bowl on the table. Have fun!

    Note: Interactive testing currently fails on Ubuntu 20.04 with an error: X Error of failed request: BadAccess (attempt to access private resource denied). We are working on a fix and will update the instructions once we have one. The script works without errors on macOS.
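As referenced in step 2 above, new sensors and measures can be registered from outside the source tree via the habitat registry. A minimal sketch of the pattern follows; the measure itself (EpisodeStepCount) is a hypothetical example, not one shipped with Habitat-Lab:

from habitat.core.embodied_task import Measure
from habitat.core.registry import registry

@registry.register_measure
class EpisodeStepCount(Measure):
    # Hypothetical measure that counts the steps taken in an episode
    cls_uuid = "episode_step_count"

    def _get_uuid(self, *args, **kwargs):
        return self.cls_uuid

    def reset_metric(self, *args, **kwargs):
        self._metric = 0

    def update_metric(self, *args, **kwargs):
        self._metric += 1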

Debugging an environment issue

Our vectorized environments are very fast, but they are not very verbose. When using VectorEnv, some errors may be silenced, resulting in hanging processes or in multiprocessing errors that are hard to interpret. We recommend setting the environment variable HABITAT_ENV_DEBUG to 1 when debugging (export HABITAT_ENV_DEBUG=1), as this will use the slower but more verbose ThreadedVectorEnv class. Do not forget to reset HABITAT_ENV_DEBUG (unset HABITAT_ENV_DEBUG) when you are done debugging, since VectorEnv is much faster than ThreadedVectorEnv.
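The variable can also be set from Python, as long as it happens before any vectorized environments are constructed (a minimal sketch):

import os

# Must run before habitat constructs VectorEnv, so ThreadedVectorEnv is used instead
os.environ["HABITAT_ENV_DEBUG"] = "1"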

Documentation

Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.

Docker Setup

We provide docker containers for Habitat, updated approximately once per year for the Habitat Challenge. This works on machines with an NVIDIA GPU and requires users to install nvidia-docker. To set up the habitat stack using docker, follow these steps:

  1. Pull the habitat docker image: docker pull fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  2. Start an interactive bash session inside the habitat docker: docker run --runtime=nvidia -it fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  3. Activate the habitat conda environment: conda init; source ~/.bashrc; source activate habitat

  4. Run the testing scripts as above: cd habitat-lab; python examples/example.py. This should print output like:

    Agent acting inside environment.
    Episode finished after 200 steps.
    

Questions?

Can't find the answer to your question? Look through common issues, or ask the developers and community on our Discussions forum.

Datasets

Common task and episode datasets used with Habitat-Lab.

Baselines

Habitat-Lab includes reinforcement learning baselines (via PPO). For details on running PPO training on sample data, refer to habitat_baselines/README.md.
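For orientation, training is typically launched through the habitat_baselines entry point; at the time of writing the invocation looks roughly like the line below, but treat it as an assumption and check habitat_baselines/README.md for the exact command and config names:

python -u -m habitat_baselines.run --config-name=pointnav/ppo_pointnav_example.yaml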

ROS-X-Habitat

ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat places emphasis on 1) leveraging Habitat Sim v2's physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work has also been published as a paper.

Note that ROS-X-Habitat was developed, and is maintained by the Lab for Computational Intelligence at UBC; it has not yet been officially supported by the Habitat Lab team. Please refer to the framework's repository for docs and discussions.

License

Habitat-Lab is MIT licensed. See the LICENSE file for details.

The trained models and the task datasets are considered data derived from the corresponding scene datasets.