agents
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Top Related Projects
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
A library of reinforcement learning components and agents
PFRL: a PyTorch-based deep reinforcement learning library
A toolkit for reproducible reinforcement learning research.
Quick Overview
TensorFlow Agents is an open-source library for reinforcement learning in TensorFlow. It provides a collection of RL algorithms, environments, and tools to facilitate research and development in the field of reinforcement learning. The library is designed to work seamlessly with TensorFlow 2.x and offers a high-level API for easy implementation of RL agents.
Pros
- Comprehensive collection of RL algorithms and environments
- Seamless integration with TensorFlow 2.x ecosystem
- Well-documented and actively maintained
- Supports both eager execution and graph mode
Cons
- Steeper learning curve for those new to TensorFlow
- Limited support for some advanced RL techniques
- Performance may be slower compared to specialized RL libraries
- Dependency on TensorFlow ecosystem may limit portability
Code Examples
- Creating a DQN agent:
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.networks import q_network
from tf_agents.environments import tf_py_environment
from tf_agents.environments import suite_gym
# Create the environment
env = suite_gym.load('CartPole-v1')
tf_env = tf_py_environment.TFPyEnvironment(env)
# Define the Q-network
q_net = q_network.QNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    fc_layer_params=(100,)
)
# Create the DQN agent
agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)
)
agent.initialize()
- Training the agent:
from tf_agents.drivers import dynamic_step_driver
from tf_agents.replay_buffers import tf_uniform_replay_buffer
# Create a replay buffer
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=tf_env.batch_size,
    max_length=100000
)
# Define the driver
driver = dynamic_step_driver.DynamicStepDriver(
    tf_env,
    agent.collect_policy,
    observers=[replay_buffer.add_batch],
    num_steps=1
)
# Training loop
for _ in range(1000):
    driver.run()
    experience = replay_buffer.gather_all()
    agent.train(experience)
    replay_buffer.clear()
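The loop above is the simplest possible setup: gather_all() pulls every stored step (and is deprecated in newer TF-Agents releases), and the very first iterations hold only a single frame. A more common pattern, along the lines of the official DQN tutorial, samples mini-batches through the replay buffer's dataset API; the warm-up length, batch size, and step count below are illustrative:
# Collect some initial experience before sampling from the buffer.
for _ in range(100):
    driver.run()
# Sample mini-batches of 2-step trajectories from the replay buffer.
dataset = replay_buffer.as_dataset(
    num_parallel_calls=3,
    sample_batch_size=64,
    num_steps=2
).prefetch(3)
iterator = iter(dataset)
# Alternate collection and training steps.
for _ in range(1000):
    driver.run()
    experience, _ = next(iterator)
    train_loss = agent.train(experience).loss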
- Evaluating the agent:
from tf_agents.metrics import tf_metrics
from tf_agents.drivers import dynamic_episode_driver
# Create metrics
avg_return = tf_metrics.AverageReturnMetric()
num_episodes = tf_metrics.NumberOfEpisodes()
# Create an evaluation driver
eval_driver = dynamic_episode_driver.DynamicEpisodeDriver(
    tf_env,
    agent.policy,
    observers=[avg_return, num_episodes],
    num_episodes=10
)
# Run evaluation
eval_driver.run()
print(f"Average Return: {avg_return.result().numpy()}")
print(f"Number of Episodes: {num_episodes.result().numpy()}")
Getting Started
To get started with TensorFlow Agents, follow these steps:
- Install TensorFlow Agents:
pip install tf-agents
- Import the necessary modules:
import tensorflow as tf
from tf_agents.environments import suite_gym
from tf_agents.agents.dqn import dqn_agent
from tf_agents.networks import q_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
- Create an environment, define a network, and instantiate an agent (a minimal sketch follows below).
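A minimal sketch of this last step, reusing the CartPole environment and DQN pieces from the Code Examples section above (the layer size and learning rate are illustrative):
from tf_agents.environments import tf_py_environment

# Create the environment and wrap it for TensorFlow.
env = suite_gym.load('CartPole-v1')
tf_env = tf_py_environment.TFPyEnvironment(env)

# Define a Q-network and instantiate a DQN agent.
q_net = q_network.QNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    fc_layer_params=(100,))
agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3),
    td_errors_loss_fn=common.element_wise_squared_loss)
agent.initialize()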
Competitor Comparisons
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.
Pros of Dopamine
- Simpler and more focused on specific RL algorithms (DQN, Rainbow, C51)
- Easier to get started with for beginners in reinforcement learning
- Designed for reproducibility and easy experimentation
Cons of Dopamine
- Less comprehensive than Agents, with fewer algorithms and features
- Focused primarily on Atari environments, while Agents supports a wider range of tasks
- Less active development and community support compared to Agents
Code Comparison
Dopamine (training a DQN agent):
from dopamine.discrete_domains.run_experiment import Runner

# `base_dir` and `create_agent_fn` are assumed to be defined elsewhere.
runner = Runner(base_dir, create_agent_fn)
runner.run_experiment()
Agents (training a DQN agent):
agent = dqn_agent.DqnAgent(
    time_step_spec,
    action_spec,
    q_network=q_net,
    optimizer=optimizer)
collect_driver.run()
Both libraries offer concise ways to train RL agents, but Dopamine's API is slightly more abstracted and easier to use out of the box. Agents provides more flexibility and control over the training process, which can be beneficial for advanced users and researchers.
OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
Pros of Baselines
- Wider range of implemented algorithms, including PPO, TRPO, and DDPG
- More extensive documentation and examples for various environments
- Easier to use for beginners due to simpler API and setup process
Cons of Baselines
- Less active development and maintenance compared to Agents
- Limited integration with TensorFlow ecosystem
- Fewer options for customization and extension of algorithms
Code Comparison
Baselines (PPO implementation):
import gym
from baselines.ppo2 import ppo2
from baselines.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make("CartPole-v1")])
model = ppo2.learn(network="mlp", env=env, total_timesteps=10000)
Agents (PPO implementation):
import tensorflow as tf
from tf_agents.agents.ppo import ppo_agent
from tf_agents.environments import suite_gym

env = suite_gym.load("CartPole-v1")
agent = ppo_agent.PPOAgent(
    time_step_spec=env.time_step_spec(),
    action_spec=env.action_spec(),
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3),
)
Both repositories offer robust implementations of reinforcement learning algorithms, but they cater to different user needs. Baselines provides a more accessible entry point for beginners and a wider range of algorithms, while Agents offers deeper integration with TensorFlow and more flexibility for advanced users.
A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Pros of Stable-baselines
- Easier to use and more beginner-friendly
- Provides a unified interface for various RL algorithms
- Well-documented with extensive examples and tutorials
Cons of Stable-baselines
- Less flexible for advanced customization
- Fewer cutting-edge algorithms compared to TF-Agents
- Limited support for distributed training
Code Comparison
Stable-baselines:
from stable_baselines3 import PPO
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10000)
TF-Agents:
import tensorflow as tf
from tf_agents.agents.ppo import ppo_agent
from tf_agents.networks import actor_distribution_network, value_network

# `obs_spec`, `action_spec`, and `time_step_spec` are assumed to come from the environment.
actor_net = actor_distribution_network.ActorDistributionNetwork(
    obs_spec, action_spec, fc_layer_params=(200, 100))
value_net = value_network.ValueNetwork(obs_spec, fc_layer_params=(200, 100))
agent = ppo_agent.PPOAgent(
    time_step_spec, action_spec,
    actor_net=actor_net,
    value_net=value_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))
Both libraries offer implementations of popular RL algorithms, but TF-Agents provides more granular control over agent components and training processes. Stable-baselines focuses on simplicity and ease of use, making it more accessible for beginners and rapid prototyping. TF-Agents, being part of the TensorFlow ecosystem, offers better integration with other TensorFlow tools and more advanced features for researchers and experienced practitioners.
A library of reinforcement learning components and agents
Pros of Acme
- More flexible and modular architecture, allowing easier customization of agents and environments
- Supports both TensorFlow and JAX, offering greater flexibility in backend choice
- Provides a wider range of pre-implemented algorithms, including more recent advancements in RL
Cons of Acme
- Steeper learning curve due to its more complex architecture
- Less integrated with TensorFlow ecosystem compared to Agents
- Documentation may be less comprehensive for beginners
Code Comparison
Agents example:
agent = dqn_agent.DqnAgent(
    time_step_spec,
    action_spec,
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)
Acme example:
agent = dqn.DQN(
    environment_spec=environment_spec,
    network=network,
    batch_size=batch_size,
    samples_per_insert=samples_per_insert,
    min_replay_size=min_replay_size)
Both repositories offer robust implementations of reinforcement learning algorithms, but Acme provides more flexibility and a wider range of algorithms, while Agents offers tighter integration with the TensorFlow ecosystem and potentially easier onboarding for beginners.
PFRL: a PyTorch-based deep reinforcement learning library
Pros of PFRL
- Built on PyTorch, offering dynamic computation graphs and easier debugging
- Extensive collection of implemented algorithms, including advanced options like SAC and TD3
- Flexible and modular design, allowing easy customization of components
Cons of PFRL
- Smaller community and less extensive documentation compared to TensorFlow Agents
- Fewer integrations with external tools and environments
- May have a steeper learning curve for those familiar with TensorFlow ecosystem
Code Comparison
PFRL (PyTorch):
import torch
import pfrl
q_func = torch.nn.Sequential(
    torch.nn.Linear(obs_size, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, n_actions)
)
optimizer = torch.optim.Adam(q_func.parameters())
explorer = pfrl.explorers.ConstantEpsilonGreedy(
    epsilon=0.3, random_action_func=env.action_space.sample)
TensorFlow Agents:
import tensorflow as tf
from tf_agents.networks import q_network
from tf_agents.agents.dqn import dqn_agent
q_net = q_network.QNetwork(
    input_tensor_spec,
    action_spec,
    fc_layer_params=(64,)
)
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)
agent = dqn_agent.DqnAgent(
    time_step_spec,
    action_spec,
    q_network=q_net,
    optimizer=optimizer
)
A toolkit for reproducible reinforcement learning research.
Pros of garage
- More flexible and modular architecture, allowing easier customization of algorithms
- Supports multiple deep learning frameworks (TensorFlow and PyTorch)
- Extensive documentation and tutorials for beginners
Cons of garage
- Smaller community and less frequent updates compared to agents
- Fewer pre-implemented algorithms and environments
- May require more setup and configuration for complex tasks
Code Comparison
garage example:
from garage import wrap_experiment
from garage.tf.algos import PPO
from garage.tf.policies import GaussianMLPPolicy
@wrap_experiment
def ppo_experiment(ctxt=None):
    # `env` and `trainer` are assumed to be constructed elsewhere
    # (e.g. via garage's Trainer and environment helpers).
    policy = GaussianMLPPolicy(env_spec=env.spec)
    algo = PPO(env_spec=env.spec, policy=policy)
    trainer.setup(algo, env)
    trainer.train()
agents example:
import tensorflow as tf
from tf_agents.agents.ppo import ppo_agent
from tf_agents.networks import actor_distribution_network
actor_net = actor_distribution_network.ActorDistributionNetwork(
    input_tensor_spec, output_tensor_spec)
agent = ppo_agent.PPOAgent(
    time_step_spec, action_spec, actor_net=actor_net)
Both repositories offer reinforcement learning frameworks, but garage provides more flexibility and support for multiple deep learning libraries, while agents is more tightly integrated with TensorFlow and offers a larger selection of pre-implemented algorithms.
README
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
TF-Agents makes implementing, deploying, and testing new Bandits and RL algorithms easier. It provides well tested and modular components that can be modified and extended. It enables fast code iteration, with good test integration and benchmarking.
To get started, we recommend checking out one of our Colab tutorials. If you need an intro to RL (or a quick recap), start here. Otherwise, check out our DQN tutorial to get an agent up and running in the Cartpole environment. API documentation for the current stable release is on tensorflow.org.
TF-Agents is under active development and interfaces may change at any time. Feedback and comments are welcome.
Table of contents
Agents
Tutorials
Multi-Armed Bandits
Examples
Installation
Contributing
Releases
Principles
Contributors
Citation
Disclaimer
Agents
In TF-Agents, the core elements of RL algorithms are implemented as Agents. An agent encompasses two main responsibilities: defining a Policy to interact with the Environment, and how to learn/train that Policy from collected experience.
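In code, these two responsibilities show up as the agent's policies and its train step. A minimal sketch, assuming an agent and a batch of collected experience like those built in the Code Examples section above:
# Exploratory policy used by drivers to collect experience.
collect_policy = agent.collect_policy
# Greedy policy used for evaluation and deployment.
eval_policy = agent.policy
# Learn from a batch of collected trajectories; returns a LossInfo tuple.
loss_info = agent.train(experience)
print(loss_info.loss)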
Currently the following algorithms are available under TF-Agents:
- DQN: Human level control through deep reinforcement learning Mnih et al., 2015
- DDQN: Deep Reinforcement Learning with Double Q-learning Hasselt et al., 2015
- DDPG: Continuous control with deep reinforcement learning Lillicrap et al., 2015
- TD3: Addressing Function Approximation Error in Actor-Critic Methods Fujimoto et al., 2018
- REINFORCE: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning Williams, 1992
- PPO: Proximal Policy Optimization Algorithms Schulman et al., 2017
- SAC: Soft Actor Critic Haarnoja et al., 2018
Tutorials
See docs/tutorials/ for tutorials on the major components provided.
Multi-Armed Bandits
The TF-Agents library contains a comprehensive Multi-Armed Bandits suite,
including Bandits environments and agents. RL agents can also be used on Bandit
environments. There is a tutorial in bandits_tutorial.ipynb, and ready-to-run examples in tf_agents/bandits/agents/examples/v2.
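As a small illustration of the Bandits APIs, the sketch below builds a LinUCB agent directly from hand-written specs; the context dimension and number of arms are arbitrary, and a real setup would pair the agent with one of the provided Bandit environments:
import tensorflow as tf
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

context_dim = 5   # illustrative context (observation) dimension
num_actions = 3   # illustrative number of arms

observation_spec = tensor_spec.TensorSpec([context_dim], tf.float32)
action_spec = tensor_spec.BoundedTensorSpec(
    dtype=tf.int32, shape=(), minimum=0, maximum=num_actions - 1)

agent = lin_ucb_agent.LinearUCBAgent(
    time_step_spec=ts.time_step_spec(observation_spec),
    action_spec=action_spec)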
Examples
End-to-end examples that train agents can be found under each agent's directory, e.g. the DQN examples under tf_agents/agents/dqn/examples/.
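For instance, the DQN example script can typically be launched as follows (the exact path and flags may differ between releases, so treat this as a sketch):
$ python tf_agents/agents/dqn/examples/v2/train_eval.py \
    --root_dir=$HOME/tmp/dqn/gym/cart-pole/ \
    --alsologtostderr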
Installation
TF-Agents publishes nightly and stable builds. For a list of releases read the Releases section. The commands below cover installing TF-Agents stable and nightly from pypi.org as well as from a GitHub clone.
:warning: If using Reverb (replay buffer), which is very common, TF-Agents will only work with Linux.
Note: Python 3.11 requires pygame 2.1.3+.
Stable
Run the commands below to install the most recent stable release. API documentation for the release is on tensorflow.org.
$ pip install --user tf-agents[reverb]
# Use keras-2
$ export TF_USE_LEGACY_KERAS=1
# Use this tag to get the matching examples and colabs.
$ git clone https://github.com/tensorflow/agents.git
$ cd agents
$ git checkout v0.18.0
If you want to install TF-Agents with versions of TensorFlow or Reverb that are flagged as not compatible by the pip dependency check, use the following pattern at your own risk.
$ pip install --user tensorflow
$ pip install --user tf-keras
$ pip install --user dm-reverb
$ pip install --user tf-agents
If you want to use TF-Agents with TensorFlow 1.15 or 2.0, install version 0.3.0:
# Newer versions of tensorflow-probability require newer versions of TensorFlow.
$ pip install tensorflow-probability==0.8.0
$ pip install tf-agents==0.3.0
Nightly
Nightly builds include newer features, but may be less stable than the versioned releases. The nightly build is pushed as tf-agents-nightly. We suggest installing nightly versions of TensorFlow (tf-nightly) and TensorFlow Probability (tfp-nightly), as those are the versions TF-Agents nightly is tested against.
To install the nightly build version, run the following:
# Use keras-2
$ export TF_USE_LEGACY_KERAS=1
# `--force-reinstall` helps guarantee the right versions.
$ pip install --user --force-reinstall tf-nightly
$ pip install --user --force-reinstall tf-keras-nightly
$ pip install --user --force-reinstall tfp-nightly
$ pip install --user --force-reinstall dm-reverb-nightly
# Installing with the `--upgrade` flag ensures you'll get the latest version.
$ pip install --user --upgrade tf-agents-nightly
From GitHub
After cloning the repository, the dependencies can be installed by running pip install -e .[tests]. TensorFlow needs to be installed independently: pip install --user tf-nightly.
Contributing
We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
Releases
TF-Agents has stable and nightly releases. The nightly releases are often fine but can have issues due to upstream libraries being in flux. The table below lists the version(s) of TensorFlow that align with each TF-Agents release, and a pinned-install example follows the table. Release versions of interest:
- 0.19.0 supports tensorflow-2.15.0.
- 0.18.0 dropped Python 3.8 support.
- 0.16.0 is the first version to support Python 3.11.
- 0.15.0 is the last release compatible with Python 3.7.
- If using numpy < 1.19, then use TF-Agents 0.15.0 or earlier.
- 0.9.0 is the last release compatible with Python 3.6.
- 0.3.0 is the last release compatible with Python 2.x.
Release | Branch / Tag | TensorFlow Version | dm-reverb Version |
---|---|---|---|
Nightly | master | tf-nightly | dm-reverb-nightly |
0.19.0 | v0.19.0 | 2.15.0 | 0.14.0 |
0.18.0 | v0.18.0 | 2.14.0 | 0.13.0 |
0.17.0 | v0.17.0 | 2.13.0 | 0.12.0 |
0.16.0 | v0.16.0 | 2.12.0 | 0.11.0 |
0.15.0 | v0.15.0 | 2.11.0 | 0.10.0 |
0.14.0 | v0.14.0 | 2.10.0 | 0.9.0 |
0.13.0 | v0.13.0 | 2.9.0 | 0.8.0 |
0.12.0 | v0.12.0 | 2.8.0 | 0.7.0 |
0.11.0 | v0.11.0 | 2.7.0 | 0.6.0 |
0.10.0 | v0.10.0 | 2.6.0 | |
0.9.0 | v0.9.0 | 2.6.0 | |
0.8.0 | v0.8.0 | 2.5.0 | |
0.7.1 | v0.7.1 | 2.4.0 | |
0.6.0 | v0.6.0 | 2.3.0 | |
0.5.0 | v0.5.0 | 2.2.0 | |
0.4.0 | v0.4.0 | 2.1.0 | |
0.3.0 | v0.3.0 | 1.15.0 and 2.0.0. |
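For example, to pin a matching pairing from the table (adjust the versions to the row you need; this is a sketch rather than an officially prescribed command):
$ pip install --user tf-agents==0.19.0 tensorflow==2.15.0 dm-reverb==0.14.0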
Principles
This project adheres to Google's AI principles. By participating, using or contributing to this project you are expected to adhere to these principles.
Contributors
We would like to recognize the following individuals for their code contributions, discussions, and other work that helped shape the TF-Agents library.
- James Davidson
- Ethan Holly
- Toby Boyd
- Summer Yue
- Robert Ormandi
- Kuang-Huei Lee
- Alexa Greenberg
- Amir Yazdanbakhsh
- Yao Lu
- Gaurav Jain
- Christof Angermueller
- Mark Daoust
- Adam Wood
Citation
If you use this code, please cite it as:
@misc{TFAgents,
  title     = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
  author    = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and
               Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and
               Ekaterina Gonina and Neal Wu and Efi Kokiopoulou and Luciano Sbaiz and
               Jamie Smith and Gábor Bartók and Jesse Berent and Chris Harris and
               Vincent Vanhoucke and Eugene Brevdo},
  howpublished = {\url{https://github.com/tensorflow/agents}},
  url       = "https://github.com/tensorflow/agents",
  year      = 2018,
  note      = "[Online; accessed 25-June-2019]"
}
Disclaimer
This is not an official Google product.