IntelLabs / coach

Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state-of-the-art reinforcement learning algorithms.

Top Related Projects

  • OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
  • stable-baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
  • TF-Agents: a reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning
  • Acme: a library of reinforcement learning components and agents
  • PFRL: a PyTorch-based deep reinforcement learning library
  • garage: a toolkit for reproducible reinforcement learning research

Quick Overview

Coach is an open-source framework for reinforcement learning developed by Intel Labs. It provides a modular and extensible platform for training and evaluating reinforcement learning agents across various environments, including robotics, games, and other complex domains.

Pros

  • Comprehensive suite of reinforcement learning algorithms and architectures
  • Highly modular and extensible design, allowing easy integration of custom components
  • Support for both single-threaded and distributed training
  • Integration with popular environments like OpenAI Gym and Unity ML-Agents

Cons

  • Steep learning curve for beginners due to the framework's complexity
  • Limited documentation and examples for some advanced features
  • Dependency on specific versions of libraries, which may cause compatibility issues
  • Less active development and community support compared to some other RL frameworks

Code Examples

  1. Creating and training a DQN agent:
from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

# Define the environment
env_params = GymVectorEnvironment(level='CartPole-v0')

# Create the agent parameters
agent_params = DQNAgentParameters()

# Create and run the graph manager
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params)
graph_manager.improve()
  2. Defining a custom environment (a parameter-class sketch for plugging it into a preset follows these examples):
import numpy as np

from rl_coach.environments.environment import Environment, EnvironmentParameters
from rl_coach.spaces import DiscreteActionSpace, VectorObservationSpace

class MyCustomEnvironment(Environment):
    def __init__(self, env_params):
        super().__init__(env_params)
        self.state = None
        self.action_space = DiscreteActionSpace(num_actions=4)
        self.observation_space = VectorObservationSpace(shape=10)

    def reset(self):
        self.state = np.random.rand(10)
        return self.state

    def step(self, action):
        # Implement your environment logic here
        next_state = np.random.rand(10)
        reward = 0
        done = False
        return next_state, reward, done, {}
  3. Using a pre-trained model:
from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.core_types import EnvironmentSteps

# Load pre-trained model
checkpoint_dir = 'path/to/checkpoint'
graph_manager = BasicRLGraphManager.load_from_checkpoint(checkpoint_dir)

# Run the model for 1000 steps
graph_manager.run(EnvironmentSteps(1000))
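
To plug a custom environment like the one in example 2 into a preset, Coach's built-in environments pair the environment class with an EnvironmentParameters subclass whose path property points at that class. The sketch below follows that pattern; the class name and module path (my_package.my_custom_environment) are placeholders for illustration:

from rl_coach.environments.environment import EnvironmentParameters

class MyCustomEnvironmentParameters(EnvironmentParameters):
    def __init__(self):
        super().__init__()

    @property
    def path(self):
        # 'module:ClassName' string that Coach uses to locate the environment class
        return 'my_package.my_custom_environment:MyCustomEnvironment'

These parameters can then be passed as env_params to a graph manager, just like GymVectorEnvironment in the examples above.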

Getting Started

To get started with Coach, follow these steps:

  1. Install Coach and its dependencies:
pip install rl_coach
  2. Create a simple script to train an agent:
from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

env_params = GymVectorEnvironment(level='CartPole-v0')
agent_params = DQNAgentParameters()
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params)
graph_manager.improve()
  3. Run the script to start training:
python your_script.py

This will train a DQN agent on the CartPole environment. You can modify the environment and agent parameters to experiment with different settings and algorithms.
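
For instance, agent hyperparameters can be tweaked on the parameters object before the graph manager is built. A minimal sketch reusing the pattern from the examples above (the specific values are arbitrary):

from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

agent_params = DQNAgentParameters()
# Adjust a couple of hyperparameters before building the graph (arbitrary example values)
agent_params.network_wrappers['main'].learning_rate = 0.00025
agent_params.algorithm.discount = 0.98

graph_manager = BasicRLGraphManager(agent_params=agent_params,
                                    env_params=GymVectorEnvironment(level='CartPole-v0'))
graph_manager.improve()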

Competitor Comparisons

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

Pros of Baselines

  • Wider range of implemented algorithms, including PPO, TRPO, and DDPG
  • More extensive documentation and examples for various environments
  • Active community support and regular updates

Cons of Baselines

  • Less modular architecture, making it harder to extend or modify
  • Steeper learning curve for beginners due to its complexity
  • Limited visualization tools for training progress

Code Comparison

Coach:

from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

env_params = GymVectorEnvironment(level='CartPole-v0')
agent_params = DQNAgentParameters()

graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params)
graph_manager.improve()

Baselines:

from baselines import deepq
from baselines.common.atari_wrappers import make_atari, wrap_deepmind

env = wrap_deepmind(make_atari('PongNoFrameskip-v4'))
model = deepq.learn(env, network='conv_only', total_timesteps=100000)
model.save('pong_model.pkl')

Both repositories offer robust implementations of reinforcement learning algorithms, but they cater to different user needs. Coach provides a more modular and extensible framework, while Baselines offers a wider range of pre-implemented algorithms and better documentation for beginners.

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms

Pros of stable-baselines

  • Simpler API and easier to use for beginners
  • Better documentation and more extensive examples
  • More active community and frequent updates

Cons of stable-baselines

  • Fewer algorithms implemented compared to Coach
  • Less flexibility for advanced users and custom implementations

Code Comparison

stable-baselines:

from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10000)

Coach:

from rl_coach.agents.ppo_agent import PPOAgentParameters
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters

schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(10000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(50)
schedule_params.evaluation_steps = EnvironmentEpisodes(5)
schedule_params.heatup_steps = EnvironmentSteps(0)

graph_manager = BasicRLGraphManager(agent_params=PPOAgentParameters(),
                                    env_params=GymVectorEnvironment(level='CartPole-v1'),
                                    schedule_params=schedule_params)
graph_manager.improve()

Both repositories provide implementations of reinforcement learning algorithms, but stable-baselines offers a more straightforward API for quick experimentation, while Coach provides more flexibility for advanced users. stable-baselines has better documentation and a more active community, making it easier for beginners to get started. However, Coach implements a wider range of algorithms and allows for more customization in the training process.

TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.

Pros of agents

  • Tighter integration with TensorFlow ecosystem
  • More active development and community support
  • Extensive documentation and tutorials

Cons of agents

  • Steeper learning curve for beginners
  • Less flexibility in algorithm customization
  • Primarily focused on TensorFlow, limiting use with other frameworks

Code Comparison

agents:

import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

q_net = q_network.QNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=(100,))

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))

coach:

from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.architectures.tensorflow_components.layers import Dense
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

agent_params = DQNAgentParameters()
agent_params.network_wrappers['main'].learning_rate = 0.001
agent_params.network_wrappers['main'].middleware_parameters.scheme = [Dense(100)]

graph_manager = BasicRLGraphManager(agent_params=agent_params,
                                    env_params=GymVectorEnvironment(level='CartPole-v0'))
graph_manager.improve()

A library of reinforcement learning components and agents

Pros of Acme

  • More comprehensive and flexible framework for RL research
  • Better support for distributed training and multi-agent scenarios
  • More active development and community support

Cons of Acme

  • Steeper learning curve due to its complexity
  • Less focus on visualization tools compared to Coach
  • Requires more setup and configuration for simple experiments

Code Comparison

Coach example:

from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager

env_params = GymVectorEnvironment(level='CartPole-v0')
agent_params = DQNAgentParameters()

graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params)
graph_manager.improve()

Acme example:

import gym

import acme
from acme import environment_loop
from acme import specs
from acme import wrappers
from acme.agents import dqn

environment = wrappers.GymWrapper(gym.make('CartPole-v0'))
environment_spec = specs.make_environment_spec(environment)
agent = dqn.DQN(environment_spec)

loop = environment_loop.EnvironmentLoop(environment, agent)
loop.run(num_episodes=10)

Both repositories provide powerful tools for reinforcement learning, but Acme offers more flexibility and scalability at the cost of increased complexity. Coach provides a more straightforward approach with built-in visualization tools, making it potentially more suitable for beginners or smaller-scale projects.

PFRL: a PyTorch-based deep reinforcement learning library

Pros of PFRL

  • More actively maintained with recent updates
  • Extensive documentation and examples
  • Supports a wider range of algorithms, including newer ones like SAC and TD3

Cons of PFRL

  • Steeper learning curve for beginners
  • Less focus on visualization tools compared to Coach
  • Primarily designed for PyTorch, which may limit flexibility for some users

Code Comparison

PFRL example (PPO implementation):

import torch
import pfrl

def make_agent(env):
    obs_size = env.observation_space.low.size
    n_actions = env.action_space.n
    # Policy head and value head, combined into a single actor-critic model
    policy = torch.nn.Sequential(torch.nn.Linear(obs_size, 64), torch.nn.Tanh(),
                                 torch.nn.Linear(64, n_actions),
                                 pfrl.policies.SoftmaxCategoricalHead())
    vf = torch.nn.Sequential(torch.nn.Linear(obs_size, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, 1))
    model = pfrl.nn.Branched(policy, vf)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    return pfrl.agents.PPO(model, opt, gpu=-1)

Coach example (PPO implementation):

from rl_coach.agents.ppo_agent import PPOAgentParameters
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters

schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(10000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(50)
schedule_params.evaluation_steps = EnvironmentEpisodes(5)
schedule_params.heatup_steps = EnvironmentSteps(0)

graph_manager = BasicRLGraphManager(
    agent_params=PPOAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=schedule_params)
graph_manager.improve()

Both libraries offer implementations of popular RL algorithms, but PFRL tends to provide more low-level control, while Coach focuses on higher-level abstractions and easier configuration.

A toolkit for reproducible reinforcement learning research.

Pros of garage

  • More modular and flexible architecture, allowing easier customization of algorithms
  • Better integration with TensorFlow and PyTorch
  • More extensive documentation and examples for researchers

Cons of garage

  • Smaller community and less frequent updates compared to Coach
  • Fewer pre-implemented algorithms and environments
  • Steeper learning curve for beginners due to its more academic focus

Code Comparison

garage example:

import gym

from garage import wrap_experiment
from garage.tf.algos import PPO
from garage.tf.baselines import GaussianMLPBaseline
from garage.tf.envs import TfEnv
from garage.tf.experiment import LocalTFRunner
from garage.tf.policies import CategoricalMLPPolicy

@wrap_experiment
def ppo_cartpole(ctxt=None):
    with LocalTFRunner(ctxt) as runner:
        env = TfEnv(gym.make('CartPole-v1'))
        # CartPole has a discrete action space, so a categorical policy is used
        policy = CategoricalMLPPolicy(env.spec)
        baseline = GaussianMLPBaseline(env.spec)
        algo = PPO(env_spec=env.spec, policy=policy, baseline=baseline)
        runner.setup(algo, env)
        runner.train(n_epochs=100, batch_size=4000)

Coach example:

from rl_coach.agents.ppo_agent import PPOAgentParameters
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters

schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(10000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(50)
schedule_params.evaluation_steps = EnvironmentEpisodes(5)
schedule_params.heatup_steps = EnvironmentSteps(0)

graph_manager = BasicRLGraphManager(
    agent_params=PPOAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=schedule_params)

README

:warning: DISCONTINUATION OF PROJECT - This project will no longer be maintained by Intel. Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project. Intel no longer accepts patches to this project. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

Coach

Coach is a Python reinforcement learning framework containing implementations of many state-of-the-art algorithms.

It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so that extending and reusing existing components is fairly painless.

Training an agent to solve an environment is as easy as running:

coach -p CartPole_DQN -r

(Demo videos: Fetch Slide, Pendulum, Starcraft, Doom Deathmatch, CARLA, MontezumaRevenge, Doom Health Gathering, PyBullet Minitaur, Gym Extensions Ant)

Benchmarks

One of the main challenges when building a research project, or a solution based on a published algorithm, is getting a concrete and reliable baseline that reproduces the algorithm's results as reported by its authors. To address this problem, we are releasing a set of benchmarks that show that Coach reliably reproduces the results of many state-of-the-art algorithms.

Installation

Note: Coach has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.

For some information on installing on Ubuntu 17.10 with Python 3.6.3, please refer to the following issue: https://github.com/IntelLabs/coach/issues/54

Coach has a few prerequisites. Installing the following packages will set up everything needed to run Coach on top of OpenAI Gym environments:

# General
sudo -E apt-get install python3-pip cmake zlib1g-dev python3-tk python-opencv -y

# Boost libraries
sudo -E apt-get install libboost-all-dev -y

# Scipy requirements
sudo -E apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran -y

# PyGame
sudo -E apt-get install libsdl-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev \
  libsmpeg-dev libportmidi-dev libavformat-dev libswscale-dev -y

# Dashboard
sudo -E apt-get install dpkg-dev build-essential python3.5-dev libjpeg-dev libtiff-dev libsdl1.2-dev libnotify-dev \
  freeglut3 freeglut3-dev libsm-dev libgtk2.0-dev libgtk-3-dev libwebkitgtk-dev libwebkitgtk-3.0-dev \
  libgstreamer-plugins-base1.0-dev -y

# Gym
sudo -E apt-get install libav-tools libsdl2-dev swig cmake -y

We recommend installing coach in a virtualenv:

sudo -E pip3 install virtualenv
virtualenv -p python3 coach_env
. coach_env/bin/activate

Finally, install coach using pip:

pip3 install rl_coach

Or alternatively, for a development environment, install coach from the cloned repository:

cd coach
pip3 install -e .

If a GPU is present, Coach's pip package will install tensorflow-gpu by default. If a GPU is not present, an Intel-optimized TensorFlow will be installed.
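
As a quick sanity check of which TensorFlow build ended up in the environment (assuming the TensorFlow 1.x releases that Coach targets), something like the following can be run:

import tensorflow as tf

print(tf.__version__)              # which TensorFlow build was installed
print(tf.test.is_gpu_available())  # True when the GPU-enabled build can see a GPU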

In addition to OpenAI Gym, several other environments were tested and are supported. Please follow the instructions in the Supported Environments section below in order to install more environments.

Getting Started

Tutorials and Documentation

Jupyter notebooks demonstrating how to run Coach from command line or as a library, implement an algorithm, or integrate an environment.

Framework documentation, algorithm description and instructions on how to contribute a new agent/environment.

Basic Usage

Running Coach

To allow reproducing results in Coach, we defined a mechanism called a preset. There are several available presets under the presets directory. To list all the available presets, use the -l flag.

To run a preset, use:

coach -r -p <preset_name>

For example:

  • CartPole environment using Policy Gradients (PG):

    coach -r -p CartPole_PG
    
  • Basic level of Doom using Dueling network and Double DQN (DDQN) algorithm:

    coach -r -p Doom_Basic_Dueling_DDQN
    

Some presets apply to a group of environment levels, such as the entire Atari or MuJoCo suites. To use these presets, the requested level should be specified using the -lvl flag.

For example:

  • Pong using the Neural Episodic Control (NEC) algorithm:

    coach -r -p Atari_NEC -lvl pong
    

Several types of agents can benefit from being run in a distributed fashion, with multiple workers in parallel. Each worker interacts with its own copy of the environment but updates a shared network, which improves the data collection speed and the stability of the learning process. To specify the number of workers to run, use the -n flag.

For example:

  • Breakout using Asynchronous Advantage Actor-Critic (A3C) with 8 workers:

    coach -r -p Atari_A3C -lvl breakout -n 8
    

It is easy to create new presets for different levels or environments by following the same pattern as in presets.py
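
As a rough illustration of what such a preset looks like, here is a minimal sketch modeled on the library examples above; the schedule values are placeholders, and the real presets under the presets directory should be used as the reference:

from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters

# Schedule: how long to heat up, train and evaluate (placeholder values)
schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(100000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10)
schedule_params.evaluation_steps = EnvironmentEpisodes(1)
schedule_params.heatup_steps = EnvironmentSteps(1000)

# The module-level graph_manager ties the agent, environment and schedule together,
# and is what the coach command loads when the preset is run.
graph_manager = BasicRLGraphManager(agent_params=DQNAgentParameters(),
                                    env_params=GymVectorEnvironment(level='CartPole-v0'),
                                    schedule_params=schedule_params)

Saving this under the presets directory as, say, MyCartPole_DQN.py (a hypothetical name) should make it runnable with coach -r -p MyCartPole_DQN.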

More usage examples can be found here.

Running Coach Dashboard (Visualization)

Training an agent to solve an environment can be tricky at times.

To debug the training process, Coach outputs several signals per trained algorithm that track algorithmic performance.

While Coach trains an agent, a CSV file containing the relevant training signals is saved to the 'experiments' directory. Coach's dashboard can then be used to dynamically visualize the training signals and track algorithmic behavior.

To use it, run:

dashboard
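
As an alternative to the dashboard, the raw signals can be inspected directly. A minimal sketch, assuming pandas is installed and an experiment has already written a CSV under the experiments directory (the exact file layout and column names depend on the run):

import glob

import pandas as pd

# Pick the most recently written CSV under the experiments directory
csv_files = sorted(glob.glob('experiments/**/*.csv', recursive=True))
signals = pd.read_csv(csv_files[-1])

print(signals.columns)  # the signals Coach logged for this run
print(signals.tail())   # the latest values of those signals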

Distributed Multi-Node Coach

As of release 0.11.0, Coach supports horizontal scaling for training RL agents on multiple nodes. In release 0.11.0 this was tested on the ClippedPPO and DQN agents. For usage instructions please refer to the documentation here.

Batch Reinforcement Learning

Training and evaluating an agent from a dataset of experience, where no simulator is available, is supported in Coach. There are example presets and a tutorial.

Supported Environments

  • OpenAI Gym:

    Installed by default by Coach's installer (see note on MuJoCo version below).

  • ViZDoom:

    Follow the instructions described in the ViZDoom repository -

    https://github.com/mwydmuch/ViZDoom

    Additionally, Coach assumes that the environment variable VIZDOOM_ROOT points to the ViZDoom installation directory.

  • Roboschool:

    Follow the instructions described in the roboschool repository -

    https://github.com/openai/roboschool

  • GymExtensions:

    Follow the instructions described in the GymExtensions repository -

    https://github.com/Breakend/gym-extensions

    Additionally, add the installation directory to the PYTHONPATH environment variable.

  • PyBullet:

Follow the instructions described in the Quick Start Guide (basically just 'pip install pybullet').

  • CARLA:

    Download release 0.8.4 from the CARLA repository -

    https://github.com/carla-simulator/carla/releases

    Install the python client and dependencies from the release tarball:

    pip3 install -r PythonClient/requirements.txt
    pip3 install PythonClient
    

    Create a new CARLA_ROOT environment variable pointing to CARLA's installation directory.

    A simple CARLA settings file (CarlaSettings.ini) is supplied with Coach, and is located in the environments directory.

  • Starcraft:

    Follow the instructions described in the PySC2 repository -

    https://github.com/deepmind/pysc2

  • DeepMind Control Suite:

    Follow the instructions described in the DeepMind Control Suite repository -

    https://github.com/deepmind/dm_control

  • Robosuite:

    Note: To use Robosuite-based environments, please install Coach from the latest cloned repository. It is not yet available as part of the rl_coach package on PyPI.

    Follow the instructions described in the robosuite documentation (see note on MuJoCo version below).

Note on MuJoCo version

OpenAI Gym supports MuJoCo only up to version 1.5 (and corresponding mujoco-py version 1.50.x.x). The Robosuite simulation framework, however, requires MuJoCo version 2.0 (and corresponding mujoco-py version 2.0.2.9, as of robosuite version 1.2). Therefore, if you wish to run both Gym-based MuJoCo environments and Robosuite environments, it's recommended to have a separate virtual environment for each.

Please note that all Gym-Based MuJoCo presets in Coach (rl_coach/presets/Mujoco_*.py) have been validated only with MuJoCo 1.5 (including the reported benchmark results).

Supported Algorithms

Value Optimization Agents

Policy Optimization Agents

General Agents

Imitation Learning Agents

Hierarchical Reinforcement Learning Agents

Memory Types

Exploration Techniques

Citation

If you used Coach for your work, please use the following citation:

@misc{caspi_itai_2017_1134899,
  author       = {Caspi, Itai and
                  Leibovich, Gal and
                  Novik, Gal and
                  Endrawis, Shadi},
  title        = {Reinforcement Learning Coach},
  month        = dec,
  year         = 2017,
  doi          = {10.5281/zenodo.1134899},
  url          = {https://doi.org/10.5281/zenodo.1134899}
}

Contact

We'd be happy to get any questions or contributions through GitHub issues and PRs.

Please make sure to take a look here before filing an issue or proposing a PR.

The Coach development team can also be contacted over email.

Disclaimer

Coach is released as a reference code for research purposes. It is not an official Intel product, and the level of quality and support may not be as expected from an official product. Additional algorithms and environments are planned to be added to the framework. Feedback and contributions from the open source and RL research communities are more than welcome.