openai/roboschool

DEPRECATED: Open-source software for robot simulation, integrated with OpenAI Gym.

Top Related Projects

deepmind/dm_control: Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.

bulletphysics/bullet3 (Bullet Physics SDK): real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning, etc.

openai/gym: A toolkit for developing and comparing reinforcement learning algorithms.

google-research/football: a reinforcement learning environment in which agents learn to play football (soccer).

Unity-Technologies/ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Quick Overview

Roboschool is an open-source physics simulator for robotics and reinforcement learning, developed by OpenAI. It provides a set of environments and robot models for training and testing AI agents in various tasks, such as locomotion, manipulation, and control.

Pros

  • Offers a wide range of robotics environments and tasks for reinforcement learning
  • Integrates with OpenAI Gym, making it easy to use with existing RL algorithms
  • Provides realistic physics simulations for more accurate training
  • Open-source and customizable, allowing researchers to modify and extend environments

Cons

  • No longer actively maintained by OpenAI (last update in 2019)
  • Limited documentation and community support compared to newer alternatives
  • May have compatibility issues with newer versions of dependencies
  • Performance can be slower compared to more recent physics simulators

Code Examples

  1. Creating a Roboschool environment:
import gym
import roboschool

env = gym.make('RoboschoolHumanoid-v1')
observation = env.reset()
  2. Running a simple episode:
for _ in range(1000):
    action = env.action_space.sample()  # Random action
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
  3. Rendering the environment:
env = gym.make('RoboschoolHumanoid-v1')
env.render()
observation = env.reset()

for _ in range(1000):
    env.render()
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()

Getting Started

To get started with Roboschool, follow these steps:

  1. Install Roboschool and its dependencies:
pip install gym
pip install roboschool
  2. Import the necessary libraries and create an environment:
import gym
import roboschool

env = gym.make('RoboschoolHumanoid-v1')
observation = env.reset()
  3. Implement your own reinforcement learning algorithm, or train agents in the Roboschool environments with an existing implementation from a library such as Stable Baselines or RLlib (see the sketch below).
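
As one possible starting point, here is a minimal training sketch using Stable Baselines (the TensorFlow-based library, installed via pip install stable-baselines), which supports the old Gym API that Roboschool uses. The environment id and timestep budget are illustrative choices, not a tuned setup:

import gym
import roboschool
from stable_baselines import PPO2

# Train a PPO agent with a default MLP policy on a Roboschool task.
env = gym.make('RoboschoolHopper-v1')
model = PPO2('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)  # illustrative budget; real runs need far more

# Roll out the learned policy for one episode.
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs)
    obs, reward, done, info = env.step(action)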

Note: As Roboschool is no longer actively maintained, consider using newer alternatives like PyBullet or MuJoCo for more up-to-date robotics simulations.
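
If you do switch, the migration is small: PyBullet ships Gym-registered counterparts of the Roboschool environments in its pybullet_envs module (installed via pip install pybullet). A minimal sketch; the environment id shown is PyBullet's rough equivalent of RoboschoolHumanoid-v1:

import gym
import pybullet_envs  # importing this module registers the Bullet envs with Gym

env = gym.make('HumanoidBulletEnv-v0')  # PyBullet counterpart of RoboschoolHumanoid-v1
observation = env.reset()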

Competitor Comparisons

deepmind/dm_control: Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.

Pros of dm_control

  • More advanced physics engine (MuJoCo) with better stability and realism
  • Wider variety of pre-built environments and tasks
  • Better documentation and examples for getting started

Cons of dm_control

  • Requires a MuJoCo license, which can be costly for commercial use
  • Steeper learning curve due to more complex API and environment setup
  • Less integration with popular RL frameworks compared to Roboschool

Code Comparison

dm_control example:

from dm_control import suite
env = suite.load(domain_name="cartpole", task_name="swingup")
timestep = env.reset()
action = env.action_spec().generate_value()
next_timestep = env.step(action)

Roboschool example:

import gym
import roboschool
env = gym.make('RoboschoolCartPole-v1')
observation = env.reset()
action = env.action_space.sample()
observation, reward, done, info = env.step(action)

Both libraries provide similar functionality for creating and interacting with reinforcement learning environments. However, dm_control offers more advanced features and a wider range of pre-built environments, while Roboschool focuses on simplicity and ease of use with OpenAI Gym integration.

bulletphysics/bullet3 (Bullet Physics SDK): real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning, etc.

Pros of Bullet3

  • More comprehensive physics simulation capabilities
  • Actively maintained with frequent updates
  • Wider community support and extensive documentation

Cons of Bullet3

  • Steeper learning curve for beginners
  • Requires more setup and configuration for basic simulations

Code Comparison

Roboschool (Python):

env = gym.make('RoboschoolHumanoid-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

Bullet3 (C++):

// Minimal world setup: collision configuration, dispatcher, broadphase, solver.
btDefaultCollisionConfiguration* config = new btDefaultCollisionConfiguration();
btCollisionDispatcher* dispatcher = new btCollisionDispatcher(config);
btDbvtBroadphase* broadphase = new btDbvtBroadphase();
btSequentialImpulseConstraintSolver* solver = new btSequentialImpulseConstraintSolver();
btDiscreteDynamicsWorld* dynamicsWorld =
    new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);

btRigidBody* body = new btRigidBody(constructionInfo);  // constructionInfo (mass, motion state, shape) elided as in the original
dynamicsWorld->addRigidBody(body);
for (int i = 0; i < 1000; i++) {
    dynamicsWorld->stepSimulation(1.f/60.f, 10);
}

Summary

Bullet3 offers more advanced physics simulation capabilities and broader community support compared to Roboschool. However, it may be more challenging for beginners to use. Roboschool provides a simpler interface for reinforcement learning tasks, while Bullet3 offers more flexibility and control over the simulation environment. The choice between the two depends on the specific requirements of your project and your level of expertise in physics simulations.

openai/gym: A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Wider variety of environments, including classic control, Atari games, and more
  • More active development and community support
  • Better documentation and tutorials available

Cons of Gym

  • Less focus on robotics-specific environments
  • May require additional dependencies for certain environments

Code Comparison

Gym:

import gym
env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

Roboschool:

import gym
import roboschool
env = gym.make('RoboschoolHumanoid-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

The code structure is similar, but Roboschool requires an additional import and uses different environment names. Gym offers a broader range of environments, while Roboschool focuses on robotics simulations. Gym is more actively maintained and has better community support, making it a more versatile choice for general reinforcement learning tasks. However, Roboschool may be preferred for specific robotics-related projects.

google-research/football: a reinforcement learning environment in which agents learn to play football (soccer).

Pros of Football

  • Focuses on a specific, complex multi-agent environment (soccer)
  • Provides a more realistic and challenging scenario for AI research
  • Offers built-in support for multi-agent reinforcement learning

Cons of Football

  • Limited to soccer-specific tasks, less versatile than Roboschool
  • May require more computational resources due to complexity
  • Steeper learning curve for researchers not familiar with soccer rules

Code Comparison

Football environment setup:

import gfootball.env as football_env
env = football_env.create_environment(env_name="11_vs_11_stochastic")

Roboschool environment setup:

import gym
import roboschool
env = gym.make('RoboschoolHumanoid-v1')

Both repositories provide reinforcement learning environments, but Football focuses on a specific domain (soccer) while Roboschool offers a variety of robotics-related tasks. Football is more suitable for researchers interested in multi-agent systems and complex team dynamics, while Roboschool provides a broader range of simpler environments for general robotics research.

Unity-Technologies/ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • More comprehensive and actively maintained
  • Integrates seamlessly with Unity game engine
  • Supports a wider range of learning algorithms and environments

Cons of ml-agents

  • Steeper learning curve for non-Unity developers
  • Requires Unity installation, which can be resource-intensive
  • Less focused on robotics simulation compared to Roboschool

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])
channel.set_configuration_parameters(time_scale=2.0)

Roboschool:

import gym
import roboschool

env = gym.make('RoboschoolHumanoid-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

README

Status: Archive (code is provided as-is, no updates expected)

DEPRECATED: Please use PyBullet instead

NEWS

2019 September 27

  • We are deprecating Roboschool and now recommend using PyBullet instead.

2017 July 17, Version 1.1

  • All envs' versions bumped to "-v1" due to a stronger stuck-joint punishment, which improves the odds of getting a good policy.
  • Flagrun-v1 is much more likely to develop a symmetric gait.
  • FlagrunHarder-v1 has a new "repeat-underlearned" learning schedule that allows it to be trained to stand up, walk, and turn without falling.
  • Atlas robot model modified (empty links removed, overly powerful feet weakened).
  • All -v1 envs ship with better zoo policies than the May versions.
  • A keyboard-controlled humanoid is included.

Roboschool

Release blog post is here:

https://blog.openai.com/roboschool/

Roboschool is a long-term project to create simulations useful for research. The roadmap is as follows:

  1. Replicate Gym MuJoCo environments.
  2. Take a step away from trajectory-centric fragile MuJoCo tasks.
  3. Explore multiplayer games.
  4. Create tasks with camera RGB image and joints in a tuple.
  5. Teach robots to follow commands, including verbal commands.

Some wiki pages:

Contributing New Environments

Help Wanted

Environments List

The list of Roboschool environments is as follows:

  • RoboschoolInvertedPendulum-v1
  • RoboschoolInvertedPendulumSwingup-v1
  • RoboschoolInvertedDoublePendulum-v1
  • RoboschoolReacher-v1
  • RoboschoolHopper-v1
  • RoboschoolWalker2d-v1
  • RoboschoolHalfCheetah-v1
  • RoboschoolAnt-v1
  • RoboschoolHumanoid-v1
  • RoboschoolHumanoidFlagrun-v1
  • RoboschoolHumanoidFlagrunHarder-v1
  • RoboschoolPong-v1

To obtain this list programmatically:

import roboschool, gym
print("\n".join(['- ' + spec.id for spec in gym.envs.registry.all() if spec.id.startswith('Roboschool')]))

Basic prerequisites

Roboschool is compatible with and tested on Python 3 (3.5 and 3.6) under osx and linux. You may be able to compile it with python2.7 (see Installation from source), but that may require a non-trivial amount of work.

Installation

If you are running Ubuntu or Debian Linux, or OS X, the easiest way to install roboschool is via pip:

pip install roboschool

Note: on a headless machine (e.g. a docker container) you may need to install graphics libraries; this can be done via apt-get install libgl1-mesa-dev

If you are running some other Linux/Unix distro, want the latest and greatest code, or want to tweak the compiler optimization options, read on...

Installation from source

Prerequisites

First, make sure you are installing from the github repo (not a source package on pypi). That is, clone this repo and cd into the cloned folder:

git clone https://github.com/openai/roboschool && cd roboschool

The system-level dependencies of roboschool are qt5 (with opengl), boost-python3 (or boost-python if you are compiling with python2), assimp, and cmake. Linux-based distros will also need the patchelf utility to tweak runtime paths, and some version of the graphics libraries is required. Qt5, assimp, cmake, and patchelf are straightforward to install:

  • Ubuntu / Debian:

    sudo apt-get install qtbase5-dev libqt5opengl5-dev libassimp-dev cmake patchelf
    
  • OSX:

    brew install qt assimp cmake
    

Next, we'll need boost-python3. On osx, brew install boost-python3 is usually sufficient; on linux, however, it is not always available as a system-level package (and when it is, it is sometimes compiled against the wrong version of python). If you are using anaconda/miniconda, boost-python3 can be installed via conda install boost. Otherwise, do we despair? Of course not! We install it from source! The script install_boost.sh should do most of the heavy lifting; note that it will need sudo to install boost-python3 after compilation is done.

Next, we need a custom version of the Bullet physics engine. On both osx and linux its installation is a little involved; fortunately, the helper script install_bullet.sh should do it for you. Finally, we also need to set some environment variables (so that pkg-config knows where the software has been installed); this is done by sourcing the exports.sh script.

To summarize, all the prerequisites can be installed as follows:

  • Ubuntu / Debian:

    sudo apt-get install qtbase5-dev libqt5opengl5-dev libassimp-dev patchelf cmake
    ./install_boost.sh
    ./install_bullet.sh
    source exports.sh
    
  • Ubuntu / Debian with anaconda:

    sudo apt-get install qtbase5-dev libqt5opengl5-dev libassimp-dev patchelf cmake
    conda install boost
    ./install_bullet.sh
    source exports.sh
    
  • OSX:

    brew install qt assimp boost-python3 cmake
    ./install_bullet.sh
    source exports.sh
    

To check that the installation was successful, run pkg-config --cflags Qt5OpenGL assimp bullet; you should see something resembling compiler options, not an error message. Now we are ready to compile the roboschool project itself.

Compile and install

The compiler options are configured in the Makefile. Feel free to tinker with them or leave them as-is. To compile the project code and then install it as a python package, use the following:

cd roboschool/cpp-household && make clean && make -j4 && cd ../.. && pip install -e .

A simple check that the resulting installation is valid:

import roboschool
import gym

env = gym.make('RoboschoolAnt-v1')
while True:
    env.step(env.action_space.sample())  # take a random action; loop runs until interrupted (Ctrl-C)
    env.render()

You can also check the installation by running a pretrained agent from the agent zoo, for instance:

python agent_zoo/RoboschoolHumanoidFlagrun_v0_2017may.py

Troubleshooting

A lot of the issues during installation from source are due to a missing or incorrect PKG_CONFIG_PATH variable. If the command pkg-config --cflags Qt5OpenGL assimp bullet shows an error, try manually finding the missing *.pc files (for instance, if pkg-config complains about assimp, run find / -name "assimp.pc"; a bit brute-force, but it works :) and then add the folder containing those files to PKG_CONFIG_PATH.

Sometimes Linux distros complain that the generated code is not position-independent and ask you to recompile something with the -fPIC option (this was seen on older versions of CentOS). In that case, try removing the -march=native compilation option in the Makefile.

On systems with nvidia drivers present, roboschool is sometimes not able to find hardware-accelerated libraries. If you see errors like

.build-release/render-ssao.o: In function `SimpleRender::ContextViewport::_depthlinear_paint(int)':
/home/peter/dev/roboschool/roboschool/cpp-household/render-ssao.cpp:75: undefined reference to `glBindMultiTextureEXT'
/home/peter/dev/roboschool/roboschool/cpp-household/render-ssao.cpp:78: undefined reference to `glBindMultiTextureEXT'
collect2: error: ld returned 1 exit status
Makefile:130: recipe for target '../robot-test-tool' failed

you can try disabling hardware rendering by setting the ROBOSCHOOL_DISABLE_HARDWARE_RENDER environment variable:

export ROBOSCHOOL_DISABLE_HARDWARE_RENDER=1

Agent Zoo

We have provided a number of pre-trained agents in the agent_zoo directory.

To see a humanoid run towards a random varying target:

python agent_zoo/RoboschoolHumanoidFlagrun_v0_2017may.py

To see three agents in a race:

python agent_zoo/demo_race2.py