
hill-a/stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms

Top Related Projects

  • OpenAI Baselines (15,630 stars): high-quality implementations of reinforcement learning algorithms
  • Stable-Baselines3: PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms
  • TF-Agents (2,774 stars): a reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning
  • garage (1,854 stars): a toolkit for reproducible reinforcement learning research
  • Gymnasium: an API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)

Quick Overview

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines. It provides a unified interface for various RL algorithms, making it easier for researchers and practitioners to experiment with and compare different approaches. The library is designed to be user-friendly, well-documented, and easily extensible.
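
As a minimal sketch of that unified interface (using the Stable-Baselines3 package names that the examples below follow), swapping one algorithm for another only changes the class that is imported; the construct/learn/save workflow stays the same:

from stable_baselines3 import A2C, PPO

# The same training workflow applies to every algorithm class
for algo in (A2C, PPO):
    model = algo("MlpPolicy", "CartPole-v1", verbose=0)
    model.learn(total_timesteps=10000)
    model.save(f"{algo.__name__.lower()}_cartpole")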

Pros

  • Unified interface for multiple RL algorithms (e.g., A2C, PPO, DQN, SAC)
  • Well-documented with extensive tutorials and examples
  • Mature and well-tested, with an actively developed successor (Stable-Baselines3) for new projects
  • Integrates well with OpenAI Gym environments

Cons

  • Limited support for custom neural network architectures
  • Some advanced RL techniques are not implemented
  • Performance may be slower compared to specialized implementations
  • Requires some understanding of RL concepts for effective use

Code Examples

  1. Training a PPO agent on the CartPole environment:
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Create the environment
env = make_vec_env('CartPole-v1', n_envs=4)

# Initialize the agent
model = PPO('MlpPolicy', env, verbose=1)

# Train the agent
model.learn(total_timesteps=25000)

# Save the trained model
model.save("ppo_cartpole")
  2. Loading a trained model and evaluating its performance:
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Load the trained model
model = PPO.load("ppo_cartpole")

# Create the environment
env = gym.make('CartPole-v1')

# Evaluate the agent
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"Mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
  3. Using a custom policy network:
import gym
import torch as th
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class CustomCNN(BaseFeaturesExtractor):
    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256):
        super(CustomCNN, self).__init__(observation_space, features_dim)
        n_input_channels = observation_space.shape[0]
        self.cnn = nn.Sequential(
            nn.Conv2d(n_input_channels, 32, kernel_size=8, stride=4, padding=0),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=0),
            nn.ReLU(),
            nn.Flatten(),
        )

        # Compute shape by doing one forward pass
        with th.no_grad():
            n_flatten = self.cnn(
                th.as_tensor(observation_space.sample()[None]).float()
            ).shape[1]

        self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU())

    def forward(self, observations: th.Tensor) -> th.Tensor:
        return self.linear(self.cnn(observations))

policy_kwargs = dict(
    features_extractor_class=CustomCNN,
    features_extractor_kwargs=dict(features_dim=128),
)
model = PPO("CnnPolicy", "BreakoutNoFrameskip-v4", policy_kwargs=policy_kwargs, verbose=1)
model.learn(1000000)

Getting Started

To get started with Stable Baselines3, follow these steps:

  1. Install the library:
pip install stable-baselines3[extra]
  2. Import the desired algorithm and create an environment:
from stable_baselines3 import PPO
import gym

env = gym.make('CartPole-v1')
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)

Competitor Comparisons

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

Pros of Baselines

  • Original implementation by OpenAI, often considered the reference implementation
  • Supports a wider range of algorithms, including some less common ones
  • Backed by OpenAI, with a large historical contributor base

Cons of Baselines

  • Less user-friendly, with a steeper learning curve
  • Documentation can be sparse and sometimes outdated
  • Less focus on code readability and maintainability

Code Comparison

Baselines:

import gym
from baselines import deepq

env = gym.make("CartPole-v0")
# deepq.learn builds the network internally and returns the trained act function
act = deepq.learn(env, network='mlp', total_timesteps=25000)

Stable-Baselines:

import gym
from stable_baselines3 import DQN
from stable_baselines3.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make("CartPole-v0")])
model = DQN("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=25000)

The comparison shows that Stable-Baselines exposes every algorithm through the same class-based API: construct the model from a policy name and an environment, then call learn(). Baselines instead exposes a per-algorithm learn() function with its own set of arguments, which reflects Stable-Baselines' stronger focus on a consistent, user-friendly interface.

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

Pros of stable-baselines3

  • Improved code quality and maintainability
  • Better documentation and examples
  • Support for PyTorch, enabling GPU acceleration and easier customization

Cons of stable-baselines3

  • Fewer implemented algorithms compared to stable-baselines
  • Potential compatibility issues with older projects using stable-baselines

Code Comparison

stable-baselines:

from stable_baselines import PPO2
model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)

stable-baselines3:

from stable_baselines3 import PPO
model = PPO('MlpPolicy', 'CartPole-v1').learn(10000)

The main difference in usage is the import statement and slight changes in class names. stable-baselines3 simplifies the naming convention by removing version numbers from algorithm names.

Both libraries provide similar functionality for training and deploying reinforcement learning agents. stable-baselines3 offers a more modern and maintainable codebase, while stable-baselines may have a broader range of implemented algorithms. Users should consider their specific requirements and the trade-offs between the two libraries when choosing which one to use for their projects.
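
As a rough sketch of the kind of customization the PyTorch backend makes convenient in stable-baselines3, the snippet below adjusts the network architecture and the compute device through standard SB3 constructor arguments (net_arch and device exist in SB3; the specific values here are only illustrative):

from stable_baselines3 import PPO

# Two hidden layers of 64 units for the policy/value networks;
# device="auto" picks a GPU when one is available ("cuda" forces it).
model = PPO(
    "MlpPolicy",
    "CartPole-v1",
    policy_kwargs=dict(net_arch=[64, 64]),
    device="auto",
    verbose=1,
)
model.learn(total_timesteps=10000)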

TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.

Pros of TensorFlow Agents

  • Deeper integration with TensorFlow ecosystem
  • More extensive documentation and tutorials
  • Broader range of algorithms and environments supported

Cons of TensorFlow Agents

  • Steeper learning curve for beginners
  • Less focus on simplicity and ease of use
  • Potentially slower development cycle for quick prototyping

Code Comparison

Stable Baselines:

from stable_baselines3 import PPO
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10000)

TensorFlow Agents:

import tensorflow as tf
from tf_agents.agents.ppo import ppo_agent

# time_step_spec, action_spec, actor_net and value_net must be built first
# (see the sketch below)
agent = ppo_agent.PPOAgent(
    time_step_spec,
    action_spec,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3),
    actor_net=actor_net,
    value_net=value_net)
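
The snippet above assumes that time_step_spec, action_spec, actor_net and value_net already exist. A hedged sketch of how they are commonly constructed with TF-Agents helpers (names follow the TF-Agents tutorials; the layer sizes are placeholders):

from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import actor_distribution_network, value_network

# Wrap a Gym environment as a TensorFlow environment
train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v1'))
time_step_spec = train_env.time_step_spec()
action_spec = train_env.action_spec()

# Simple fully connected actor and value networks
actor_net = actor_distribution_network.ActorDistributionNetwork(
    train_env.observation_spec(), action_spec, fc_layer_params=(64, 64))
value_net = value_network.ValueNetwork(
    train_env.observation_spec(), fc_layer_params=(64, 64))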

Summary

Stable Baselines focuses on simplicity and ease of use, making it ideal for beginners and quick prototyping. TensorFlow Agents offers more advanced features and integration with the TensorFlow ecosystem, but with a steeper learning curve. The choice between the two depends on the user's familiarity with TensorFlow and the complexity of the project at hand.

A toolkit for reproducible reinforcement learning research.

Pros of garage

  • More extensive algorithm support, including meta-learning algorithms
  • Better support for custom environments and neural network architectures
  • More comprehensive documentation and examples

Cons of garage

  • Steeper learning curve due to higher complexity
  • Less active community and fewer updates compared to Stable Baselines
  • May be overkill for simpler reinforcement learning tasks

Code Comparison

garage:

from garage import wrap_experiment
from garage.tf.algos import PPO
from garage.tf.policies import GaussianMLPPolicy

@wrap_experiment
def my_experiment(ctxt=None):
    policy = GaussianMLPPolicy(env_spec=env.spec)
    algo = PPO(env_spec=env.spec, policy=policy)

Stable Baselines:

from stable_baselines import PPO2
from stable_baselines.common.policies import MlpPolicy

model = PPO2(MlpPolicy, env)
model.learn(total_timesteps=10000)

Both libraries offer implementations of popular reinforcement learning algorithms, but garage provides more flexibility and customization options at the cost of increased complexity. Stable Baselines focuses on simplicity and ease of use, making it more suitable for beginners or quick prototyping. The code comparison shows that garage requires more setup but allows for greater control over the experiment structure.

An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)

Pros of Gymnasium

  • More active development and community support
  • Broader range of environments and tools for reinforcement learning
  • Better compatibility with modern Python versions and libraries

Cons of Gymnasium

  • Steeper learning curve for beginners
  • Less integrated with pre-trained models and algorithms
  • May require more setup and configuration for complex tasks

Code Comparison

Gymnasium:

import gymnasium as gym
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

Stable-Baselines:

import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10000)
obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)

Both libraries provide tools for reinforcement learning, but Gymnasium focuses on environments and interfaces, while Stable-Baselines offers pre-implemented algorithms and models. Gymnasium is more flexible and up-to-date, while Stable-Baselines provides a more streamlined experience for quick implementation of RL algorithms.

README

WARNING: This package is in maintenance mode, please use Stable-Baselines3 (SB3) for an up-to-date version. You can find a migration guide in SB3 documentation.

Stable Baselines

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines.

You can read a detailed presentation of Stable Baselines in the Medium article.

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

Note: despite its simplicity of use, Stable Baselines (SB) assumes you have some knowledge about Reinforcement Learning (RL). You should not use this library without some practice. To that end, we provide good resources in the documentation to get started with RL.

Main differences with OpenAI Baselines

This toolset is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups:

  • Unified structure for all algorithms
  • PEP8 compliant (unified code style)
  • Documented functions and classes
  • More tests & more code coverage
  • Additional algorithms: SAC and TD3 (+ HER support for DQN, DDPG, SAC and TD3)
| Features | Stable-Baselines | OpenAI Baselines |
| --- | --- | --- |
| State of the art RL methods | :heavy_check_mark: (1) | :heavy_check_mark: |
| Documentation | :heavy_check_mark: | :x: |
| Custom environments | :heavy_check_mark: | :heavy_check_mark: |
| Custom policies | :heavy_check_mark: | :heavy_minus_sign: (2) |
| Common interface | :heavy_check_mark: | :heavy_minus_sign: (3) |
| Tensorboard support | :heavy_check_mark: | :heavy_minus_sign: (4) |
| Ipython / Notebook friendly | :heavy_check_mark: | :x: |
| PEP8 code style | :heavy_check_mark: | :heavy_check_mark: (5) |
| Custom callback | :heavy_check_mark: | :heavy_minus_sign: (6) |

(1): Forked from previous version of OpenAI baselines, with now SAC and TD3 in addition
(2): Currently not available for DDPG, and only from the run script.
(3): Only via the run script.
(4): Rudimentary logging of training information (no loss nor graph).
(5): EDIT: you did it OpenAI! :cat:
(6): Passing a callback function is only available for DQN
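
As a minimal sketch of one of the additional algorithms, SAC is trained through the same interface as every other Stable-Baselines model (Pendulum-v0 is used here only because SAC requires a continuous, Box action space):

import gym
from stable_baselines import SAC

env = gym.make('Pendulum-v0')
model = SAC('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)
model.save('sac_pendulum')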

Documentation

Documentation is available online: https://stable-baselines.readthedocs.io/

RL Baselines Zoo: A Collection of 100+ Trained RL Agents

RL Baselines Zoo is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines.

It also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.

Goals of this repository:

  1. Provide a simple interface to train and enjoy RL agents
  2. Benchmark the different Reinforcement Learning algorithms
  3. Provide tuned hyperparameters for each environment and RL algorithm
  4. Have fun with the trained agents!

Github repo: https://github.com/araffin/rl-baselines-zoo

Documentation: https://stable-baselines.readthedocs.io/en/master/guide/rl_zoo.html

Installation

Note: Stable-Baselines supports Tensorflow versions from 1.8.0 to 1.14.0. Support for Tensorflow 2 API is planned.

Prerequisites

Stable-Baselines requires Python 3 (>=3.5) with the development headers. You'll also need the system packages CMake, OpenMPI and zlib. These can be installed as follows:

Ubuntu

sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev

Mac OS X

Installation of system packages on Mac requires Homebrew. With Homebrew installed, run the following:

brew install cmake openmpi

Windows 10

To install stable-baselines on Windows, please look at the documentation.

Install using pip

Install the Stable Baselines package:

pip install stable-baselines[mpi]

This includes an optional dependency on MPI, enabling algorithms DDPG, GAIL, PPO1 and TRPO. If you do not need these algorithms, you can install without MPI:

pip install stable-baselines
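
A quick, hypothetical smoke test to confirm the installation, and that the MPI-dependent algorithms are importable when the [mpi] extra was used:

import stable_baselines
print(stable_baselines.__version__)

# These algorithms are only exposed when mpi4py is installed
from stable_baselines import DDPG, GAIL, PPO1, TRPO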

Please read the documentation for more details and alternatives (from source, using docker).

Example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.

Here is a quick example of how to train and run PPO2 on a cartpole environment:

import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
# Optional: PPO2 requires a vectorized environment to run
# the env is now wrapped automatically when passing it to the constructor
# env = DummyVecEnv([lambda: env])

model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()

env.close()

Or just train a model with a one-liner if the environment is registered in Gym and if the policy is registered:

from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)

Please read the documentation for more examples.

Try it online with Colab Notebooks!

All the tutorial examples can be executed online using Google Colab notebooks; the links are listed in the documentation.

Implemented Algorithms

| Name | Refactored (1) | Recurrent | Box | Discrete | MultiDiscrete | MultiBinary | Multi Processing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A2C | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| ACER | :heavy_check_mark: | :heavy_check_mark: | :x: (5) | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
| ACKTR | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
| DDPG | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: (4) |
| DQN | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :x: | :x: | :x: |
| GAIL (2) | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: (4) |
| HER (3) | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
| PPO1 | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: (4) |
| PPO2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| SAC | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | :x: | :x: | :x: |
| TD3 | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: | :x: | :x: | :x: |
| TRPO | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: (4) |

(1): Whether or not the algorithm has been refactored to fit the BaseRLModel class.
(2): Only implemented for TRPO.
(3): Re-implemented from scratch, now supports DQN, DDPG, SAC and TD3
(4): Multi Processing with MPI.
(5): TODO, in project scope.

NOTE: Soft Actor-Critic (SAC) and Twin Delayed DDPG (TD3) were not part of the original baselines and HER was reimplemented from scratch.
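
A hedged sketch of using the re-implemented HER wrapper, assuming a goal-based environment (one whose observations are dicts with 'observation', 'achieved_goal' and 'desired_goal', e.g. the Fetch robotics tasks); the keyword names follow the Stable-Baselines HER documentation, but check them against the docs for your installed version:

import gym
from stable_baselines import HER, SAC

# A goal-based environment (requires the gym robotics / MuJoCo extras)
env = gym.make('FetchReach-v1')

# HER wraps an off-policy algorithm (DQN, DDPG, SAC or TD3)
model = HER('MlpPolicy', env, model_class=SAC,
            n_sampled_goal=4, goal_selection_strategy='future', verbose=1)
model.learn(total_timesteps=10000)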

Actions gym.spaces:

  • Box: An N-dimensional box that contains every point in the action space.
  • Discrete: A list of possible actions, where only one action can be used at each timestep.
  • MultiDiscrete: A list of possible actions, where, at each timestep, only one action from each discrete set can be used.
  • MultiBinary: A list of possible actions, where, at each timestep, any combination of actions can be used.
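
For concreteness, the action space is exposed on any Gym environment and determines which algorithms in the table above apply; a minimal check:

import gym

for env_id in ('CartPole-v1', 'Pendulum-v0'):
    env = gym.make(env_id)
    # CartPole-v1 has a Discrete(2) action space; Pendulum-v0 has a Box action space
    print(env_id, env.action_space, env.observation_space)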

MuJoCo

Some of the baselines examples use the MuJoCo (Multi-Joint dynamics with Contact) physics simulator, which is proprietary and requires binaries and a license (a temporary 30-day license can be obtained from www.mujoco.org). Instructions on setting up MuJoCo can be found here.
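
Once mujoco-py and a valid license are set up, MuJoCo-based environments are created like any other Gym environment (a sketch; HalfCheetah-v2 is just one example task):

import gym

# Requires mujoco-py and a MuJoCo license to be installed
env = gym.make('HalfCheetah-v2')
obs = env.reset()
print(env.observation_space, env.action_space)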

Testing the installation

All unit tests in stable-baselines can be run using the pytest runner:

pip install pytest pytest-cov
make pytest

Projects Using Stable-Baselines

We try to maintain a list of projects using stable-baselines in the documentation; please let us know if you want your project to appear on this page ;)

Citing the Project

To cite this repository in publications:

@misc{stable-baselines,
  author = {Hill, Ashley and Raffin, Antonin and Ernestus, Maximilian and Gleave, Adam and Kanervisto, Anssi and Traore, Rene and Dhariwal, Prafulla and Hesse, Christopher and Klimov, Oleg and Nichol, Alex and Plappert, Matthias and Radford, Alec and Schulman, John and Sidor, Szymon and Wu, Yuhuai},
  title = {Stable Baselines},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/hill-a/stable-baselines}},
}

Maintainers

Stable-Baselines is currently maintained by Ashley Hill (aka @hill-a), Antonin Raffin (aka @araffin), Maximilian Ernestus (aka @ernestum), Adam Gleave (@AdamGleave) and Anssi Kanervisto (@Miffyli).

Important Note: We do not provide technical support or consulting, and we do not answer personal questions by email.

How To Contribute

For anyone interested in making the baselines better, there is still some documentation that needs to be done. If you want to contribute, please read the CONTRIBUTING.md guide first.

Acknowledgments

Stable Baselines was created in the robotics lab U2IS (INRIA Flowers team) at ENSTA ParisTech.

Logo credits: L.M. Tenkes