Farama-Foundation / PettingZoo

An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

Top Related Projects

  • openai/gym (34,643 stars) — A toolkit for developing and comparing reinforcement learning algorithms.
  • Unity-Technologies/ml-agents (17,044 stars) — The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
  • deepmind/open_spiel — OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
  • google-research/football — Google Research Football: a reinforcement learning environment for soccer/football simulation.
  • openai/multi-agent-emergence-environments — Environment generation code for the paper "Emergent Tool Use From Multi-Agent Autocurricula".
  • AI4Finance-Foundation/FinRL (9,620 stars) — FinRL: Financial Reinforcement Learning. 🔥

Quick Overview

PettingZoo is a Python library for conducting research in multi-agent reinforcement learning. It provides a wide variety of environments with a simple interface, similar to OpenAI's Gym library but extended for multi-agent scenarios. PettingZoo supports various types of environments, including classic games, Atari games, and more complex simulations.

Pros

  • Extensive collection of multi-agent environments
  • Consistent API across different environment types
  • Compatibility with popular reinforcement learning libraries
  • Well-documented with examples and tutorials

Cons

  • Learning curve for users new to multi-agent reinforcement learning
  • Some environments may require additional dependencies
  • Performance can vary depending on the complexity of the environment
  • Limited support for custom environment creation

Code Examples

  1. Importing and initializing an environment:
from pettingzoo.classic import tictactoe_v3

env = tictactoe_v3.env()
env.reset()
  2. Stepping through an environment:
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample(observation["action_mask"])  # insert your agent's policy here; sampling with the action mask keeps the random move legal
    env.step(action)
  3. Rendering an environment:
env = tictactoe_v3.env(render_mode="human")
env.reset()
for agent in env.agent_iter():
    env.render()
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample(observation["action_mask"])  # random legal move
    env.step(action)
env.close()

Getting Started

To get started with PettingZoo, follow these steps:

  1. Install PettingZoo:
pip install pettingzoo
  2. Import and use an environment:
from pettingzoo.classic import chess_v5

env = chess_v5.env()
env.reset()

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample(observation["action_mask"])  # random legal move
    env.step(action)

This example sets up a chess environment and runs a random agent that only plays legal moves. Replace the sampled action with your own agent's policy to implement custom behavior.
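
For instance, a hand-written policy can be supplied as a function that maps the current observation to an action. The helper below is purely illustrative (my_policy is a hypothetical name, not part of PettingZoo); it assumes a classic environment such as chess, where the observation is a dictionary containing an action_mask of legal moves:

import numpy as np

def my_policy(observation, agent):
    # Hypothetical policy: choose uniformly among the moves marked legal
    # by the action mask exposed by PettingZoo's classic environments.
    legal_actions = np.flatnonzero(observation["action_mask"])
    return int(np.random.choice(legal_actions))

Inside the loop above, action = my_policy(observation, agent) then replaces the random sample.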

Competitor Comparisons

openai/gym

A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Widely adopted and well-established in the reinforcement learning community
  • Extensive documentation and tutorials available
  • Simpler API for single-agent environments

Cons of Gym

  • Limited support for multi-agent environments
  • Less active development and maintenance compared to PettingZoo
  • Fewer built-in environments, especially for complex scenarios

Code Comparison

Gym:

import gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()

PettingZoo:

from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)

The main difference is that PettingZoo uses an agent iteration loop, allowing for multi-agent environments, while Gym focuses on single-agent scenarios with a simpler API. PettingZoo offers more flexibility for complex, multi-agent reinforcement learning tasks, but may require more setup for basic environments.

Unity-Technologies/ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • Seamless integration with Unity game engine, allowing for complex 3D environments
  • Comprehensive toolkit with pre-built components for reinforcement learning
  • Strong support for multi-agent scenarios and imitation learning

Cons of ml-agents

  • Limited to Unity environment, less flexible for non-Unity projects
  • Steeper learning curve for those unfamiliar with Unity
  • Potentially higher computational requirements for complex 3D simulations

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])

PettingZoo:

from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)

The code snippets demonstrate the initialization process for each library. ml-agents focuses on setting up a Unity environment, while PettingZoo offers a more straightforward approach to creating and interacting with multi-agent environments across various domains.

deepmind/open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

Pros of OpenSpiel

  • Extensive library of game implementations, including classical board games and poker variants
  • Strong focus on game theory and multi-agent reinforcement learning
  • Comprehensive documentation and research-oriented design

Cons of OpenSpiel

  • Steeper learning curve for beginners due to its research-oriented nature
  • Less emphasis on modern video game environments compared to PettingZoo
  • Primarily C++ based, which may be less accessible for Python-focused developers

Code Comparison

OpenSpiel (C++):

#include "open_spiel/spiel.h"
#include "open_spiel/games/tic_tac_toe.h"

std::shared_ptr<const open_spiel::Game> game = open_spiel::LoadGame("tic_tac_toe");
std::unique_ptr<open_spiel::State> state = game->NewInitialState();

PettingZoo (Python):

from pettingzoo.classic import tictactoe_v3

env = tictactoe_v3.env()
env.reset()

Both libraries offer easy-to-use interfaces for creating game environments, but OpenSpiel's C++ implementation may require more setup compared to PettingZoo's Python-based approach.

google-research/football

Google Research Football: a reinforcement learning environment for soccer/football simulation.

Pros of Football

  • Focused on a specific domain (soccer/football), providing a deep and realistic simulation
  • Includes a built-in game engine with 3D rendering capabilities
  • Offers pre-trained models and baselines for benchmarking

Cons of Football

  • Limited to a single game environment, reducing versatility for general reinforcement learning research
  • Steeper learning curve due to the complexity of the soccer simulation
  • Requires more computational resources for 3D rendering and physics simulation

Code Comparison

PettingZoo example:

from pettingzoo.butterfly import knights_archers_zombies_v10
env = knights_archers_zombies_v10.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)

Football example:

from gfootball.env import create_environment
env = create_environment(env_name="academy_empty_goal_close", stacked=False)
obs = env.reset()
action = env.action_space.sample()
obs, reward, done, info = env.step(action)

Both repositories provide reinforcement learning environments, but PettingZoo offers a wider range of multi-agent scenarios across various domains, while Football focuses exclusively on soccer simulations with higher fidelity and domain-specific features.

openai/multi-agent-emergence-environments

Environment generation code for the paper "Emergent Tool Use From Multi-Agent Autocurricula"

Pros of multi-agent-emergence-environments

  • Focuses on emergent behavior and complex multi-agent interactions
  • Provides environments specifically designed for studying collective intelligence
  • Includes advanced scenarios like hide-and-seek and tool use

Cons of multi-agent-emergence-environments

  • Limited variety of environments compared to PettingZoo
  • Less active development and community support
  • Narrower scope, primarily focused on OpenAI's research interests

Code Comparison

PettingZoo environment initialization:

from pettingzoo.butterfly import knights_archers_zombies_v10
env = knights_archers_zombies_v10.env()
env.reset()

multi-agent-emergence-environments initialization (the exact module paths and factory arguments vary by scenario; the hide-and-seek entry point is shown as an illustrative example):

# Environments are constructed via per-scenario factory functions in mae_envs.envs
from mae_envs.envs.hide_and_seek import make_env

env = make_env()  # scenario-specific arguments (agent counts, horizon, etc.) can be passed here

Both libraries provide multi-agent environments, but PettingZoo offers a wider range of scenarios and a more standardized API. multi-agent-emergence-environments excels in complex, emergent behavior studies but has a narrower focus and less extensive documentation.

AI4Finance-Foundation/FinRL

FinRL: Financial Reinforcement Learning. 🔥

Pros of FinRL

  • Specialized for financial reinforcement learning tasks
  • Includes pre-built environments for various financial markets
  • Provides comprehensive documentation and tutorials for financial RL applications

Cons of FinRL

  • More limited in scope compared to PettingZoo's diverse environments
  • Less active community and fewer contributors
  • Steeper learning curve for users not familiar with financial concepts

Code Comparison

FinRL example:

from finrl.apps import crypto_trading
from finrl.finrl_meta import data_processor as dp

df = dp.download_data(start_date = '2019-09-12', end_date = '2021-05-31', ticker_list = ['BTC/USDT'])
env = crypto_trading.CryptoEnv(df=df)

PettingZoo example:

from pettingzoo.butterfly import knights_archers_zombies_v10

env = knights_archers_zombies_v10.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)

Both libraries offer easy-to-use APIs for creating and interacting with environments, but FinRL focuses on financial scenarios while PettingZoo provides a wider range of multi-agent environments across various domains.

README

PettingZoo is a Python library for conducting research in multi-agent reinforcement learning, akin to a multi-agent version of Gymnasium.

The documentation website is at pettingzoo.farama.org and we have a public discord server (which we also use to coordinate development work) that you can join here: https://discord.gg/nhvKkYa6qX

Environments

PettingZoo includes the following families of environments: Atari (multi-player Atari 2600 games), Butterfly (cooperative graphical games requiring a high degree of coordination), Classic (classical board and card games), MPE (simple multi-agent particle communication tasks), and SISL (three cooperative environments originally from the Stanford Intelligent Systems Laboratory).
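
As a quick illustration, one environment from each family can be imported as shown below (version suffixes change over time, so the exact module names may differ in newer releases, and each family needs its optional dependencies installed, as described under Installation):

from pettingzoo.atari import pong_v3
from pettingzoo.butterfly import pistonball_v6
from pettingzoo.classic import chess_v5
from pettingzoo.mpe import simple_spread_v3
from pettingzoo.sisl import waterworld_v4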

Installation

To install the base PettingZoo library: pip install pettingzoo.

This does not include dependencies for all families of environments (some environments can be problematic to install on certain systems).

To install the dependencies for one family, use pip install 'pettingzoo[atari]', or use pip install 'pettingzoo[all]' to install all dependencies.

We support Python 3.8, 3.9, 3.10 and 3.11 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.

Note: Some Linux distributions may require manual installation of cmake, swig, or zlib1g-dev (e.g., sudo apt install cmake swig zlib1g-dev)

Getting started

For an introduction to PettingZoo, see Basic Usage. To create a new environment, see our Environment Creation Tutorial and Custom Environment Examples; a minimal illustrative sketch also follows below. For examples of training RL models with PettingZoo, see the tutorials on the documentation website.
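
As a rough sketch of what a custom environment can look like, the following is a minimal, illustrative parallel environment for one-shot rock-paper-scissors (not the tutorial's exact code; it assumes a recent PettingZoo release where ParallelEnv.reset returns both observations and infos):

import functools

from gymnasium.spaces import Discrete
from pettingzoo import ParallelEnv


class RockPaperScissorsEnv(ParallelEnv):
    """Minimal two-player, single-round rock-paper-scissors (illustrative only)."""

    metadata = {"name": "rock_paper_scissors_v0"}

    def __init__(self):
        self.possible_agents = ["player_0", "player_1"]
        self.agents = []

    @functools.lru_cache(maxsize=None)
    def observation_space(self, agent):
        return Discrete(4)  # 0-2: opponent's last move, 3: nothing observed yet

    @functools.lru_cache(maxsize=None)
    def action_space(self, agent):
        return Discrete(3)  # 0: rock, 1: paper, 2: scissors

    def reset(self, seed=None, options=None):
        self.agents = self.possible_agents[:]
        observations = {a: 3 for a in self.agents}
        infos = {a: {} for a in self.agents}
        return observations, infos

    def step(self, actions):
        # Both players act simultaneously; each then observes the other's move.
        a0, a1 = actions["player_0"], actions["player_1"]
        observations = {"player_0": a1, "player_1": a0}
        beats = {0: 2, 1: 0, 2: 1}  # rock beats scissors, paper beats rock, scissors beats paper
        if a0 == a1:
            rewards = {"player_0": 0, "player_1": 0}
        elif beats[a0] == a1:
            rewards = {"player_0": 1, "player_1": -1}
        else:
            rewards = {"player_0": -1, "player_1": 1}
        terminations = {a: True for a in self.agents}  # the game lasts a single step
        truncations = {a: False for a in self.agents}
        infos = {a: {} for a in self.agents}
        self.agents = []
        return observations, rewards, terminations, truncations, infos

An environment written this way can then be sanity-checked with the API tests shipped in pettingzoo.test (for example parallel_api_test).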

API

PettingZoo models environments as Agent Environment Cycle (AEC) games, in order to cleanly support all types of multi-agent RL environments under one API and to minimize the potential for certain classes of common bugs.

Using environments in PettingZoo is very similar to Gymnasium, i.e. you initialize an environment via:

from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()

Environments can be interacted with in a manner very similar to Gymnasium:

env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()  # this is where you would insert your policy
    env.step(action)

For the complete API documentation, please see https://pettingzoo.farama.org/api/aec/

Parallel API

In certain environments, it's valid to assume that agents take their actions at the same time. For these games, we offer a secondary API to allow for parallel actions, documented at https://pettingzoo.farama.org/api/parallel/
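
A minimal interaction loop with the parallel API looks roughly like the following sketch (using pistonball; in recent releases reset returns both observations and infos):

from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.parallel_env()
observations, infos = env.reset()
while env.agents:
    # every live agent submits its action for the same step
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()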

SuperSuit

SuperSuit is a library that includes all commonly used RL wrappers (frame stacking, observation normalization, etc.) for PettingZoo and Gymnasium environments, with a nice API. We developed it in lieu of building wrappers into PettingZoo. https://github.com/Farama-Foundation/SuperSuit
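
As a brief, illustrative sketch (wrapper names follow SuperSuit's versioned naming and may differ between releases), preprocessing a PettingZoo environment with SuperSuit can look like this:

import supersuit as ss
from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.parallel_env()
env = ss.color_reduction_v0(env, mode="B")     # keep a single color channel
env = ss.resize_v1(env, x_size=84, y_size=84)  # downscale observations
env = ss.frame_stack_v1(env, 4)                # stack the last 4 frames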

Environment Versioning

PettingZoo keeps strict versioning for reproducibility reasons. All environments end in a suffix like "_v0". When changes are made to environments that might impact learning results, the number is increased by one to prevent potential confusion.

Project Maintainers

Project Manager: Elliot Tower

Maintenance for this project is also contributed by the broader Farama team: farama.org/team.

Citation

To cite this project in a publication, please use:

@article{terry2021pettingzoo,
  title={Pettingzoo: Gym for multi-agent reinforcement learning},
  author={Terry, J and Black, Benjamin and Grammel, Nathaniel and Jayakumar, Mario and Hari, Ananth and Sullivan, Ryan and Santos, Luis S and Dieffendahl, Clemens and Horsch, Caroline and Perez-Vicente, Rodrigo and others},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={15032--15043},
  year={2021}
}