Top Related Projects
- gym: A toolkit for developing and comparing reinforcement learning algorithms.
- stable-baselines: A fork of OpenAI Baselines; implementations of reinforcement learning algorithms.
- agents (TF-Agents): A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
- acme: A library of reinforcement learning components and agents.
- Gymnasium: An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym).
- Ray: A unified framework for scaling AI and Python applications, consisting of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
Quick Overview
PARL is a flexible and high-performance reinforcement learning framework developed by PaddlePaddle. It supports various deep reinforcement learning algorithms and is designed to be easily extensible for both research and industrial applications. PARL aims to provide a unified interface for different reinforcement learning algorithms and environments.
Pros
- Supports multiple reinforcement learning algorithms (DQN, PPO, DDPG, etc.)
- Highly scalable, allowing for parallel training across multiple CPUs and GPUs
- Integrates well with PaddlePaddle ecosystem for deep learning tasks
- Provides comprehensive documentation and examples for easy adoption
Cons
- Primarily focused on PaddlePaddle, which may limit integration with other deep learning frameworks
- Less popular compared to some other reinforcement learning libraries like OpenAI Gym or Stable Baselines
- May have a steeper learning curve for those not familiar with PaddlePaddle
Code Examples
- Defining the Q-network model for a DQN agent (PARL 1.x fluid-style layers API):
import parl
from parl import layers

class DQNModel(parl.Model):
    def __init__(self, act_dim):
        hid1_size = 128
        hid2_size = 128
        # two hidden layers followed by a linear output of Q-values, one per action
        self.fc1 = layers.fc(size=hid1_size, act='relu')
        self.fc2 = layers.fc(size=hid2_size, act='relu')
        self.fc3 = layers.fc(size=act_dim)

    def forward(self, obs):
        h1 = self.fc1(obs)
        h2 = self.fc2(h1)
        Q = self.fc3(h2)
        return Q
- Defining a simple policy gradient algorithm (same legacy fluid-style API):
import parl
import paddle.fluid as fluid
from parl import layers

class PolicyGradient(parl.Algorithm):
    def __init__(self, model, lr=None):
        self.model = model
        assert isinstance(lr, float)
        self.lr = lr

    def predict(self, obs):
        # forward pass: action probabilities for the given observation
        return self.model(obs)

    def learn(self, obs, action, reward):
        act_prob = self.model(obs)
        # negative log-likelihood of the taken action, weighted by the return
        log_prob = layers.cross_entropy(act_prob, action)
        cost = log_prob * reward
        cost = layers.reduce_mean(cost)
        optimizer = fluid.optimizer.Adam(learning_rate=self.lr)
        optimizer.minimize(cost)
        return cost
- Training loop example (a sketch of the agent it assumes follows after this list):
for episode in range(num_episodes):
    obs = env.reset()
    episode_reward = 0
    while True:
        action = agent.sample(obs)
        next_obs, reward, done, _ = env.step(action)
        agent.learn(obs, action, reward, next_obs, done)
        obs = next_obs
        episode_reward += reward
        if done:
            break
    print(f"Episode {episode}: Reward = {episode_reward}")
Getting Started
To get started with PARL:
- Install PARL:
pip install parl
- Import necessary modules:
import parl
import gym
import numpy as np
- Create an environment and define your model, algorithm, and agent:
env = gym.make('CartPole-v0')
model = YourModel(act_dim=env.action_space.n)
algorithm = YourAlgorithm(model)
agent = YourAgent(algorithm)  # YourAgent subclasses parl.Agent, as in the examples above
- Start training your agent using the environment and the training loop from the code examples above.
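After training, the agent can typically be persisted and evaluated greedily. A hedged sketch follows; save/restore are the usual persistence methods on PARL agents, the checkpoint path is illustrative, and the evaluation loop mirrors the training loop but calls predict (no exploration) instead of sample:
# save and reload the trained parameters (path is illustrative)
agent.save('./dqn_cartpole.ckpt')
agent.restore('./dqn_cartpole.ckpt')

# greedy evaluation episode
obs = env.reset()
total_reward = 0
while True:
    action = agent.predict(obs)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
print(f"Evaluation reward: {total_reward}")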
Competitor Comparisons
A toolkit for developing and comparing reinforcement learning algorithms.
Pros of gym
- Widely adopted and supported by the RL community
- Extensive documentation and tutorials available
- Large variety of pre-built environments for testing RL algorithms
Cons of gym
- Limited to providing environments for reinforcement learning tasks
- Less focus on distributed training and deployment
- Fewer built-in algorithms compared to PARL
Code Comparison
gym:
import gym

env = gym.make('CartPole-v1')
observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()
PARL:
import parl
import gym

env = gym.make('CartPole-v1')
model = CartpoleModel(act_dim=env.action_space.n)            # a user-defined parl.Model
algorithm = parl.algorithms.DQN(model, gamma=0.99, lr=1e-3)  # built-in DQN algorithm
agent = CartpoleAgent(algorithm)                             # a user-defined parl.Agent
for episode in range(1000):
    obs = env.reset()
    action = agent.sample(obs)
The code comparison shows that gym focuses on environment interaction, while PARL provides higher-level abstractions for implementing RL algorithms. PARL integrates with gym environments but adds additional layers for model definition and agent behavior.
A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Pros of stable-baselines
- More extensive documentation and tutorials
- Wider range of implemented algorithms
- Active community and frequent updates
Cons of stable-baselines
- Limited support for distributed training
- Less flexibility in customizing neural network architectures
Code Comparison
PARL example:
import parl
from parl import layers

class Model(parl.Model):
    def __init__(self, act_dim):
        self.fc1 = layers.fc(size=128, act='relu')
        self.fc2 = layers.fc(size=act_dim)
stable-baselines3 example:
import gym
from stable_baselines3 import PPO

env = gym.make('CartPole-v1')
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)
PARL focuses on a more modular approach, allowing users to define custom models and algorithms. stable-baselines provides a higher-level API for quick implementation of popular algorithms.
Both libraries offer robust reinforcement learning capabilities, but PARL excels in distributed training and customization, while stable-baselines provides a more user-friendly experience with a wider range of pre-implemented algorithms.
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Pros of agents
- Extensive documentation and tutorials
- Wider community support and more frequent updates
- Better integration with TensorFlow ecosystem
Cons of agents
- Steeper learning curve for beginners
- More complex API structure
- Potentially slower execution compared to PARL
Code Comparison
PARL example:
import parl
from parl import layers

class Model(parl.Model):
    def __init__(self):
        self.fc1 = layers.fc(size=100, act='relu')
        self.fc2 = layers.fc(size=1)
agents example:
import tensorflow as tf
from tf_agents.networks import network

class MyNetwork(network.Network):
    def __init__(self, observation_spec, action_spec, name='MyNetwork'):
        super(MyNetwork, self).__init__(
            input_tensor_spec=observation_spec,
            state_spec=(),
            name=name)
        self.dense1 = tf.keras.layers.Dense(100, activation='relu')
        self.dense2 = tf.keras.layers.Dense(1)
Both PARL and agents offer robust reinforcement learning frameworks, but agents provides more comprehensive documentation and better integration with the TensorFlow ecosystem. However, PARL may be easier for beginners to grasp and potentially offers faster execution. The code examples demonstrate the different approaches to defining neural network models in each framework.
A library of reinforcement learning components and agents
Pros of acme
- More comprehensive and flexible framework for RL research
- Better documentation and examples
- Stronger integration with JAX for high-performance computing
Cons of acme
- Steeper learning curve for beginners
- Less focus on industrial applications
- Potentially more complex setup and configuration
Code Comparison
PARL example:
import parl
import paddle

class Model(parl.Model):
    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(4, 2)

    def forward(self, obs):
        return self.fc(obs)
acme example:
import acme
import haiku as hk
import jax.numpy as jnp

class Network(acme.networks.Base):
    def __init__(self):
        self.layer = hk.Linear(2)

    def __call__(self, inputs):
        return self.layer(inputs)
Both frameworks provide abstractions for building RL models, but acme's approach is more flexible and integrates better with JAX. PARL's syntax is more straightforward for those familiar with PaddlePaddle, while acme offers more advanced features for researchers.
PARL is better suited for industrial applications and beginners, with a focus on ease of use and deployment. acme, on the other hand, excels in research environments, offering more tools and flexibility for complex RL experiments.
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
Pros of Gymnasium
- More extensive and diverse set of environments for reinforcement learning
- Better documentation and community support
- Actively maintained and updated with new features and improvements
Cons of Gymnasium
- Steeper learning curve for beginners compared to PARL
- Less integrated with deep learning frameworks, requiring additional setup
Code Comparison
PARL example:
import parl
import gym

env = gym.make('CartPole-v0')
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.n
model = CartpoleModel(obs_dim, act_dim)                      # a user-defined parl.Model
algorithm = parl.algorithms.DQN(model, gamma=0.99, lr=1e-3)
agent = CartpoleAgent(algorithm)                             # a user-defined parl.Agent
Gymnasium example:
import gymnasium as gym
from stable_baselines3 import DQN
env = gym.make('CartPole-v1')
model = DQN('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)
Both PARL and Gymnasium provide frameworks for reinforcement learning, but they have different focuses and strengths. PARL is more tightly integrated with PaddlePaddle and offers a simpler API for beginners, while Gymnasium provides a wider range of environments and is more flexible in terms of algorithm implementation.
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Pros of Ray
- More extensive ecosystem with libraries for distributed computing, machine learning, and reinforcement learning
- Better support for distributed and parallel computing across multiple machines
- Larger community and more frequent updates
Cons of Ray
- Steeper learning curve due to its broader scope and more complex architecture
- Potentially overkill for simpler reinforcement learning tasks
- Less focused on reinforcement learning compared to PARL
Code Comparison
PARL example:
import parl
from parl import layers

class Model(parl.Model):
    def __init__(self):
        self.fc = layers.fc(size=1)
Ray example:
import ray

@ray.remote
class Actor:
    def __init__(self):
        self.value = 0
Both frameworks provide abstractions for distributed computing and reinforcement learning, but Ray offers a more general-purpose approach, while PARL is more focused on reinforcement learning tasks. Ray's code tends to be more explicit about distributed aspects, while PARL's API is more tailored to RL-specific concepts.
README
English | 简体中文
PARL is a flexible and highly efficient reinforcement learning framework.
About PARL
Features
Reproducible. We provide algorithms that stably reproduce the results of many influential reinforcement learning algorithms.
Large Scale. Ability to support high-performance parallelization of training with thousands of CPUs and multiple GPUs.
Reusable. Algorithms provided in the repository can be adapted to a new task simply by defining a forward network; the training mechanism is built automatically.
Extensible. Build new algorithms quickly by inheriting the abstract class in the framework.
Abstractions
PARL aims to build an agent for training algorithms to perform complex tasks. The main abstractions introduced by PARL, used to build an agent recursively, are the following:
Model
Model is abstracted to construct the forward network, defining a policy network or critic network given the state as input.
Algorithm
Algorithm describes the mechanism used to update the parameters in a Model and often contains at least one model.
Agent
Agent, a data bridge between the environment and the algorithm, is responsible for data I/O with the outside environment and describes data preprocessing before feeding data into the training process.
Note: For more information about base classes, please visit our tutorial and API documentation.
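To illustrate how the three abstractions nest, here is a minimal, hedged sketch in the paddle-based style; CartpoleModel and CartpoleAgent are illustrative names, and the DQN constructor arguments are assumptions that may vary across PARL versions:
import paddle
import parl

# Model: the forward network (a policy or critic) given the state as input
class CartpoleModel(parl.Model):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.fc1 = paddle.nn.Linear(obs_dim, 128)
        self.fc2 = paddle.nn.Linear(128, act_dim)

    def forward(self, obs):
        h = paddle.nn.functional.relu(self.fc1(obs))
        return self.fc2(h)  # Q-values, one per action

# Algorithm: owns the model and defines how its parameters are updated
model = CartpoleModel(obs_dim=4, act_dim=2)
algorithm = parl.algorithms.DQN(model, gamma=0.99, lr=1e-3)  # argument names are assumptions

# Agent: the data bridge between the environment and the algorithm
class CartpoleAgent(parl.Agent):
    def __init__(self, algorithm):
        super().__init__(algorithm)

    def predict(self, obs):
        obs = paddle.to_tensor(obs, dtype='float32')
        return int(self.alg.predict(obs).argmax())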
Parallelization
PARL provides a compact API for distributed training, allowing users to transfer the code into a parallelized version by simply adding a decorator. For more information about our APIs for parallel training, please visit our documentation.
Here is a Hello World example to demonstrate how easy it is to leverage outer computation resources.
#============Agent.py=================
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

parl.connect('localhost:8037')
agent = Agent()
agent.say_hello()
ans = agent.sum(1, 5)  # runs remotely, without consuming any local computation resources
Two steps to use outer computation resources:
- Use the parl.remote_class decorator on a class; it becomes a new class whose instances can run on other CPUs or machines.
- Call parl.connect to initialize parallel communication before creating an object. Calling any method of such objects does not consume local computation resources, since they are executed elsewhere.
Users can write code in a simple way, just as they would write multi-threaded code, but with actors consuming remote resources. We also provide examples of parallelized algorithms such as IMPALA and A2C; for usage details, please refer to these examples.
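As a concrete, hedged illustration (the port, CPU count, and Simulator class are arbitrary examples; xparl start is PARL's cluster launcher):
# one-off setup from a shell: start a local cluster, e.g.
#   xparl start --port 8037 --cpu_num 4
import parl

@parl.remote_class
class Simulator(object):
    def rollout(self, steps):
        # stand-in for an expensive environment rollout
        return sum(range(steps))

parl.connect('localhost:8037')
sim = Simulator()            # the instance lives on a remote CPU from the cluster
result = sim.rollout(10000)  # blocks like a normal call but executes remotely;
                             # use one Python thread per actor for concurrency
print(result)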
Install:
Dependencies
- Python 3.6+ (Python 3.8+ is preferable for distributed training).
- paddlepaddle>=2.3.1 (optional if you only want to use the parallelization APIs)
pip install parl
Getting Started
Several points to get you started:
- Tutorial: how to solve the CartPole problem.
- Xparl Usage: how to set up a cluster with xparl and compute in parallel.
- Advanced Tutorial: create customized algorithms.
- API documentation
For beginners who know little about reinforcement learning, we also provide an introductory course: ( Video | Code )
Examples
- QuickStart
- DQN
- ES
- DDPG
- A2C
- TD3
- SAC
- QMIX
- MADDPG
- PPO
- CQL
- IMPALA
- Winning Solution for NIPS2018: AI for Prosthetics Challenge
- Winning Solution for NIPS2019: Learn to Move Challenge
- Winning Solution for NIPS2020: Learning to Run a Power Network Challenge