stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Top Related Projects
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Stable Baselines: a fork of OpenAI Baselines, with implementations of reinforcement learning algorithms
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
PARL: a high-performance distributed training framework for Reinforcement Learning
Quick Overview
Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines, a fork of OpenAI Baselines, and aims to provide clean code, good documentation, and quality implementations of popular RL algorithms.
Pros
- Well-documented and easy to use for both beginners and experienced researchers
- Implements various popular RL algorithms with consistent interfaces
- Actively maintained with regular updates and bug fixes
- Integrates well with Gym/Gymnasium environments and provides tools for custom environment creation (see the environment-checker example at the end of the Code Examples below)
Cons
- Limited to PyTorch, which may not suit users preferring other frameworks
- Some advanced RL algorithms or recent innovations might not be included
- Performance may not always match specialized implementations of specific algorithms
- Requires understanding of reinforcement learning concepts for effective use
Code Examples
- Training a PPO agent on the CartPole environment:
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
# Create environment
env = make_vec_env("CartPole-v1", n_envs=4)
# Initialize and train the agent
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=25000)
# Save the trained model
model.save("ppo_cartpole")
- Loading a pre-trained model and evaluating it:
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gymnasium as gym
# Load pre-trained model
model = PPO.load("ppo_cartpole")
# Create environment for evaluation
env = gym.make("CartPole-v1")
# Evaluate the agent
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"Mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
- Using a custom policy network:
import gym
import torch as th
from torch import nn
from stable_baselines3 import A2C
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor
class CustomCNN(BaseFeaturesExtractor):
    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256):
        super().__init__(observation_space, features_dim)
        n_input_channels = observation_space.shape[0]
        self.cnn = nn.Sequential(
            nn.Conv2d(n_input_channels, 32, kernel_size=8, stride=4, padding=0),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=0),
            nn.ReLU(),
            nn.Flatten(),
        )
        # Compute the flattened feature size by doing one forward pass
        with th.no_grad():
            n_flatten = self.cnn(
                th.as_tensor(observation_space.sample()[None]).float()
            ).shape[1]
        self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU())

    def forward(self, observations: th.Tensor) -> th.Tensor:
        return self.linear(self.cnn(observations))
policy_kwargs = dict(
features_extractor_class=CustomCNN,
features_extractor_kwargs=dict(features_dim=128),
)
model = A2C("CnnPolicy", "BreakoutNoFrameskip-v4", policy_kwargs=policy_kwargs, verbose=1)
model.learn(total_timesteps=1000)
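- Checking a custom environment with SB3's environment checker (a hedged sketch: GoLeftEnv is a toy environment written here for illustration, not part of the library):
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3.common.env_checker import check_env
class GoLeftEnv(gym.Env):
    """Toy 1-D grid: the agent starts on the right and must reach cell 0."""
    def __init__(self, grid_size: int = 10):
        super().__init__()
        self.grid_size = grid_size
        self.agent_pos = grid_size - 1
        self.action_space = spaces.Discrete(2)  # 0 = move left, 1 = move right
        self.observation_space = spaces.Box(low=0, high=grid_size, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.agent_pos = self.grid_size - 1
        return np.array([self.agent_pos], dtype=np.float32), {}

    def step(self, action):
        self.agent_pos += -1 if action == 0 else 1
        self.agent_pos = int(np.clip(self.agent_pos, 0, self.grid_size))
        terminated = self.agent_pos == 0
        reward = 1.0 if terminated else 0.0
        return np.array([self.agent_pos], dtype=np.float32), reward, terminated, False, {}
# check_env warns about common API violations before you spend time training
check_env(GoLeftEnv())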
Getting Started
To get started with Stable Baselines3, follow these steps:
- Install the library:
pip install stable-baselines3[extra]
- Import the desired algorithm and create an environment:
from stable_baselines3 import PPO
import gymnasium as gym
env = gym.make("CartPole-v1")
Competitor Comparisons
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Pros of baselines
- Developed by OpenAI, a leading AI research organization
- Includes a wider variety of RL algorithms
- More established and battle-tested in research environments
Cons of baselines
- Less actively maintained and updated
- Documentation is less comprehensive and user-friendly
- Code structure is more complex and harder to understand for beginners
Code Comparison
baselines:
import gym
from baselines import deepq
from baselines.common.atari_wrappers import wrap_deepmind
env = wrap_deepmind(gym.make("PongNoFrameskip-v4"))
model = deepq.learn(env, network='conv_only', total_timesteps=100000)
stable-baselines3:
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
env = make_atari_env("PongNoFrameskip-v4", n_envs=1, seed=0)
model = DQN("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=100000)
The code comparison shows that stable-baselines3 offers a more streamlined and intuitive API, making it easier for users to implement and train RL agents. baselines, while powerful, requires more setup and configuration.
stable-baselines: a fork of OpenAI Baselines, with implementations of reinforcement learning algorithms
Pros of stable-baselines
- More established and mature project with a longer history
- Supports a wider range of algorithms and environments
- Better documentation and more extensive examples
Cons of stable-baselines
- Uses older TensorFlow 1.x, which is less performant and harder to maintain
- Less active development and slower updates
- Lacks some modern features and optimizations found in PyTorch-based implementations
Code Comparison
stable-baselines:
from stable_baselines import PPO2
model = PPO2('MlpPolicy', 'CartPole-v1', verbose=1)
model.learn(total_timesteps=10000)
stable-baselines3:
from stable_baselines3 import PPO
model = PPO('MlpPolicy', 'CartPole-v1', verbose=1)
model.learn(total_timesteps=10000)
The code structure is similar, but stable-baselines3 uses updated import statements and slightly different class names. The core functionality remains largely the same, making migration between the two libraries relatively straightforward for most use cases.
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Pros of TF-Agents
- Built on TensorFlow, offering seamless integration with TensorFlow ecosystem
- Provides a wider range of algorithms and environments
- Better support for distributed training and deployment
Cons of TF-Agents
- Steeper learning curve, especially for those new to TensorFlow
- Less user-friendly documentation compared to Stable Baselines3
- Slower development cycle and community updates
Code Comparison
Stable Baselines3 (PPO implementation):
from stable_baselines3 import PPO
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10000)
TF-Agents (PPO implementation):
import tensorflow as tf
from tf_agents.agents.ppo import ppo_agent
from tf_agents.networks import actor_distribution_network, value_network
# obs_spec, action_spec and time_step_spec are assumed to come from the environment
actor_net = actor_distribution_network.ActorDistributionNetwork(
    obs_spec, action_spec, fc_layer_params=(200, 100))
value_net = value_network.ValueNetwork(obs_spec, fc_layer_params=(200, 100))
agent = ppo_agent.PPOAgent(
    time_step_spec, action_spec, actor_net=actor_net, value_net=value_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Pros of Ray
- Broader scope: Ray is a general-purpose distributed computing framework, offering more versatility beyond reinforcement learning
- Scalability: Designed for large-scale distributed computing, making it suitable for bigger projects and clusters
- Ecosystem: Includes libraries for various tasks like hyperparameter tuning (Ray Tune) and reinforcement learning (RLlib)
Cons of Ray
- Steeper learning curve: More complex to set up and use due to its broader feature set
- Overhead: May introduce unnecessary complexity for smaller projects focused solely on reinforcement learning
- Documentation: Can be overwhelming due to the wide range of features and use cases
Code Comparison
Ray (RLlib) example:
import ray
from ray import tune
from ray.rllib.agents import ppo
ray.init()
tune.run(ppo.PPOTrainer, config={"env": "CartPole-v0"})
Stable-Baselines3 example:
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
env = make_vec_env("CartPole-v1", n_envs=4)
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=25000)
PARL: a high-performance distributed training framework for Reinforcement Learning
Pros of PARL
- Built on PaddlePaddle, offering better integration with Baidu's AI ecosystem
- Supports distributed training out-of-the-box
- Includes a wider range of algorithms, including multi-agent reinforcement learning
Cons of PARL
- Less extensive documentation compared to Stable-baselines3
- Smaller community and fewer third-party resources
- Steeper learning curve for those not familiar with PaddlePaddle
Code Comparison
PARL example:
import parl
from parl import layers
class Model(parl.Model):
    def __init__(self, act_dim):
        self.fc1 = layers.fc(size=128, act='relu')
        self.fc2 = layers.fc(size=act_dim)
Stable-baselines3 example:
from stable_baselines3 import PPO
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10000)
Both libraries offer easy-to-use APIs for implementing reinforcement learning algorithms. PARL provides more flexibility in defining custom models, while Stable-baselines3 focuses on simplicity and quick implementation of common algorithms.
README
Stable Baselines3
Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines.
You can read a detailed presentation of Stable Baselines3 in the v1.0 blog post or our JMLR paper.
These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.
Note: Despite its simplicity of use, Stable Baselines3 (SB3) assumes you have some knowledge about Reinforcement Learning (RL). You should not utilize this library without some practice. To that extent, we provide good resources in the documentation to get started with RL.
Main Features
The performance of each algorithm was tested (see Results section in their respective page), you can take a look at the issues #48 and #49 for more details.
Features | Stable-Baselines3 |
---|---|
State of the art RL methods | :heavy_check_mark: |
Documentation | :heavy_check_mark: |
Custom environments | :heavy_check_mark: |
Custom policies | :heavy_check_mark: |
Common interface | :heavy_check_mark: |
Dict observation space support | :heavy_check_mark: |
Ipython / Notebook friendly | :heavy_check_mark: |
Tensorboard support | :heavy_check_mark: |
PEP8 code style | :heavy_check_mark: |
Custom callback | :heavy_check_mark: |
High code coverage | :heavy_check_mark: |
Type hints | :heavy_check_mark: |
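To illustrate the "Custom callback" entry above, here is a minimal hedged sketch using SB3's BaseCallback (the class name and print frequency are made up for the example):
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback
class PrintTimestepsCallback(BaseCallback):
    """Print the number of environment timesteps every print_freq calls."""
    def __init__(self, print_freq: int = 1_000):
        super().__init__()
        self.print_freq = print_freq

    def _on_step(self) -> bool:
        if self.n_calls % self.print_freq == 0:
            print(f"{self.num_timesteps} timesteps so far")
        return True  # returning False would stop training early
model = PPO("MlpPolicy", "CartPole-v1", verbose=0)
model.learn(total_timesteps=5_000, callback=PrintTimestepsCallback())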
Planned features
Please take a look at the Roadmap and Milestones.
Migration guide: from Stable-Baselines (SB2) to Stable-Baselines3 (SB3)
A migration guide from SB2 to SB3 can be found in the documentation.
Documentation
Documentation is available online: https://stable-baselines3.readthedocs.io/
Integrations
Stable-Baselines3 has some integration with other libraries/services like Weights & Biases for experiment tracking or Hugging Face for storing/sharing trained models. You can find out more in the dedicated section of the documentation.
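For example, a pretrained model can be pulled from the Hugging Face Hub (a hedged sketch: it assumes the optional huggingface_sb3 package is installed, and the repo_id/filename below are placeholders for an actual model card):
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download a checkpoint from the Hub, then load it like any local SB3 model
checkpoint = load_from_hub(repo_id="sb3/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)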
RL Baselines3 Zoo: A Training Framework for Stable Baselines3 Reinforcement Learning Agents
RL Baselines3 Zoo is a training framework for Reinforcement Learning (RL).
It provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.
In addition, it includes a collection of tuned hyperparameters for common environments and RL algorithms, and agents trained with those settings.
Goals of this repository:
- Provide a simple interface to train and enjoy RL agents
- Benchmark the different Reinforcement Learning algorithms
- Provide tuned hyperparameters for each environment and RL algorithm
- Have fun with the trained agents!
Github repo: https://github.com/DLR-RM/rl-baselines3-zoo
Documentation: https://rl-baselines3-zoo.readthedocs.io/en/master/
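Typical usage looks like the following (a hedged sketch: it assumes the zoo is installed as the rl_zoo3 package; exact flags can differ between zoo versions):
pip install rl_zoo3
python -m rl_zoo3.train --algo ppo --env CartPole-v1
python -m rl_zoo3.enjoy --algo ppo --env CartPole-v1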
SB3-Contrib: Experimental RL Features
We implement experimental features in a separate contrib repository: SB3-Contrib
This allows SB3 to maintain a stable and compact core, while still providing the latest features, like Recurrent PPO (PPO LSTM), Truncated Quantile Critics (TQC), Quantile Regression DQN (QR-DQN) or PPO with invalid action masking (Maskable PPO).
Documentation is available online: https://sb3-contrib.readthedocs.io/
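A hedged sketch of using one of these contrib algorithms (assumes sb3-contrib is installed with pip install sb3-contrib; its classes mirror the SB3 interface):
from sb3_contrib import TQC
# TQC targets continuous action spaces such as Pendulum-v1
model = TQC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=5_000)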
Stable-Baselines Jax (SBX)
Stable Baselines Jax (SBX) is a proof of concept version of Stable-Baselines3 in Jax, with recent algorithms like DroQ or CrossQ.
It provides a minimal number of features compared to SB3 but can be much faster (up to 20x): https://twitter.com/araffin2/status/1590714558628253698
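Usage is intended to mirror SB3; a hedged sketch, assuming the sbx-rl package is installed and exposes SB3-style classes as its README describes:
from sbx import SAC
model = SAC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=5_000)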
Installation
Note: Stable-Baselines3 supports PyTorch >= 1.13
Prerequisites
Stable Baselines3 requires Python 3.8+.
Windows 10
To install stable-baselines on Windows, please look at the documentation.
Install using pip
Install the Stable Baselines3 package:
pip install stable-baselines3[extra]
Note: Some shells such as Zsh require quotation marks around brackets, i.e. pip install 'stable-baselines3[extra]'
(More Info).
This installs optional dependencies like Tensorboard, OpenCV or ale-py to train on Atari games. If you do not need those, you can use:
pip install stable-baselines3
Please read the documentation for more details and alternatives (from source, using docker).
Example
Most of the code in the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.
Here is a quick example of how to train and run PPO on a cartpole environment:
import gymnasium as gym
from stable_baselines3 import PPO
env = gym.make("CartPole-v1", render_mode="human")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
vec_env = model.get_env()
obs = vec_env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = vec_env.step(action)
    vec_env.render()
    # VecEnv resets automatically
    # if done:
    #     obs = env.reset()
env.close()
Or just train a model with a one liner if the environment is registered in Gymnasium and if the policy is registered:
from stable_baselines3 import PPO
model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
Please read the documentation for more examples.
Try it online with Colab Notebooks!
All the following examples can be executed online using Google Colab notebooks:
- Full Tutorial
- All Notebooks
- Getting Started
- Training, Saving, Loading
- Multiprocessing
- Monitor Training and Plotting
- Atari Games
- RL Baselines Zoo
- PyBullet
Implemented Algorithms
Name | Recurrent | Box | Discrete | MultiDiscrete | MultiBinary | Multi Processing |
---|---|---|---|---|---|---|
ARS1 | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
A2C | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
DDPG | :x: | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
DQN | :x: | :x: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
HER | :x: | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
PPO | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
QR-DQN1 | :x: | :x: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: |
RecurrentPPO1 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
SAC | :x: | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
TD3 | :x: | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
TQC1 | :x: | :heavy_check_mark: | :x: | :x: | :x: | :heavy_check_mark: |
TRPO1 | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Maskable PPO1 | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
1: Implemented in SB3 Contrib GitHub repository.
Actions gym.spaces:
- Box: A N-dimensional box that contains every point in the action space.
- Discrete: A list of possible actions, where each timestep only one of the actions can be used.
- MultiDiscrete: A list of possible actions, where each timestep only one action of each discrete set can be used.
- MultiBinary: A list of possible actions, where each timestep any of the actions can be used in any combination.
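To see which space a given environment uses, you can simply inspect it (a minimal sketch; the printed values are indicative):
import gymnasium as gym
print(gym.make("CartPole-v1").action_space)   # Discrete(2)
print(gym.make("Pendulum-v1").action_space)   # Box(-2.0, 2.0, (1,), float32)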
Testing the installation
Install dependencies
pip install -e .[docs,tests,extra]
Run tests
All unit tests in stable baselines3 can be run using the pytest runner:
make pytest
To run a single test file:
python3 -m pytest -v tests/test_env_checker.py
To run a single test:
python3 -m pytest -v -k 'test_check_env_dict_action'
You can also do a static type check using pytype and mypy:
pip install pytype mypy
make type
Codestyle check with ruff:
pip install ruff
make lint
Projects Using Stable-Baselines3
We try to maintain a list of projects using stable-baselines3 in the documentation, please tell us if you want your project to appear on this page ;)
Citing the Project
To cite this repository in publications:
@article{stable-baselines3,
author = {Antonin Raffin and Ashley Hill and Adam Gleave and Anssi Kanervisto and Maximilian Ernestus and Noah Dormann},
title = {Stable-Baselines3: Reliable Reinforcement Learning Implementations},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {268},
pages = {1-8},
url = {http://jmlr.org/papers/v22/20-1364.html}
}
Maintainers
Stable-Baselines3 is currently maintained by Ashley Hill (aka @hill-a), Antonin Raffin (aka @araffin), Maximilian Ernestus (aka @ernestum), Adam Gleave (@AdamGleave), Anssi Kanervisto (@Miffyli) and Quentin Gallouédec (@qgallouedec).
Important Note: We do not provide technical support or consulting, and we do not answer personal questions via email. Please post your questions on the RL Discord, Reddit, or Stack Overflow instead.
How To Contribute
To anyone interested in making the baselines better: there is still some documentation that needs to be done. If you want to contribute, please read the CONTRIBUTING.md guide first.
Acknowledgments
The initial work to develop Stable Baselines3 was partially funded by the project Reduced Complexity Models from the Helmholtz-Gemeinschaft Deutscher Forschungszentren, and by the EU's Horizon 2020 Research and Innovation Programme under grant number 951992 (VeriDream).
The original version, Stable Baselines, was created in the robotics lab U2IS (INRIA Flowers team) at ENSTA ParisTech.
Logo credits: L.M. Tenkes