SMAC: The StarCraft Multi-Agent Challenge

Top Related Projects

  • pysc2: StarCraft II Learning Environment
  • PettingZoo: An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
  • mujoco-py: MuJoCo is a physics engine for detailed, efficient rigid body simulations with contacts. mujoco-py allows using MuJoCo from Python 3.
  • Gym: A toolkit for developing and comparing reinforcement learning algorithms.
  • ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
  • Google Research Football: A multi-agent soccer simulation environment.

Quick Overview

SMAC (StarCraft Multi-Agent Challenge) is an environment for research in the field of collaborative multi-agent reinforcement learning (MARL) based on Blizzard's StarCraft II RTS game. It offers a diverse set of challenging scenarios that require agents to learn complex cooperative behaviors.

Pros

  • Provides a standardized benchmark for MARL research
  • Offers a variety of challenging scenarios with different difficulty levels
  • Supports both state and observation-based tasks
  • Integrates well with popular RL libraries like PyMARL and RLlib

Cons

  • Requires a full StarCraft II installation to run (the free Starter Edition works)
  • Can be computationally intensive, especially for larger scenarios
  • Limited to the specific game mechanics and scenarios of StarCraft II
  • May have a steep learning curve for researchers unfamiliar with StarCraft II

Code Examples

  1. Initializing the SMAC environment:

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m")
env_info = env.get_env_info()

  2. Taking a step in the environment (each agent must pick one of its currently available actions):

import numpy as np

obs, state = env.reset()
actions = [np.random.choice(np.nonzero(env.get_avail_agent_actions(i))[0])
           for i in range(env_info["n_agents"])]
reward, terminated, info = env.step(actions)

  3. Rendering the environment:

env.render()
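
For reference, the env_info dictionary returned by get_env_info() contains the sizes a learning algorithm typically needs. A minimal sketch of inspecting it (the printed values depend on the chosen map):

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m")
env_info = env.get_env_info()

# Number of agents, per-agent action-space size, per-agent observation size,
# global state size, and the episode step limit.
for key in ("n_agents", "n_actions", "obs_shape", "state_shape", "episode_limit"):
    print(key, "=", env_info[key])

env.close()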

Getting Started

To get started with SMAC:

  1. Install StarCraft II (by default the game is expected in ~/StarCraftII/ on Linux; otherwise set the SC2PATH environment variable) and copy the SMAC maps into its Maps directory (see the SMAC maps section below).
  2. Install SMAC:
    pip install git+https://github.com/oxwhirl/smac.git
    
  3. Create and run a simple environment:
    from smac.env import StarCraft2Env
    import numpy as np

    env = StarCraft2Env(map_name="3m")
    env_info = env.get_env_info()

    for episode in range(10):
        env.reset()
        terminated = False
        episode_reward = 0

        while not terminated:
            # Each agent picks a random action from its currently available ones
            actions = []
            for agent_id in range(env_info["n_agents"]):
                avail_actions = env.get_avail_agent_actions(agent_id)
                actions.append(np.random.choice(np.nonzero(avail_actions)[0]))

            reward, terminated, _ = env.step(actions)
            episode_reward += reward

        print(f"Episode {episode + 1} reward: {episode_reward}")

    env.close()
    

This example creates a simple SMAC environment, runs 10 episodes in which each agent takes random actions from its set of available actions, and prints the total reward for each episode.

Competitor Comparisons

pysc2: StarCraft II Learning Environment

Pros of pysc2

  • More comprehensive and feature-rich, providing full access to StarCraft II's API
  • Officially supported by DeepMind, ensuring regular updates and maintenance
  • Offers a wider range of environments and scenarios for research

Cons of pysc2

  • Steeper learning curve due to its complexity and extensive features
  • Requires a full StarCraft II installation, which can be resource-intensive
  • May be overkill for simpler reinforcement learning experiments

Code Comparison

pysc2:

from pysc2.env import sc2_env
from pysc2.lib import actions, features

# SC2Env also needs the players and an agent interface format to launch
env = sc2_env.SC2Env(
    map_name="Simple64",
    players=[sc2_env.Agent(sc2_env.Race.terran),
             sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.easy)],
    agent_interface_format=features.AgentInterfaceFormat(
        feature_dimensions=features.Dimensions(screen=84, minimap=64)))
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])

smac:

from smac.env import StarCraft2Env
import numpy as np

env = StarCraft2Env(map_name="3m")
env.reset()
# Sample a random action from agent 0's available actions
avail = env.get_avail_agent_actions(0)
action = np.random.choice(np.nonzero(avail)[0])

Summary

pysc2 offers a more comprehensive toolkit for StarCraft II research, while smac provides a simpler, more focused environment for multi-agent reinforcement learning. pysc2 is better suited for complex scenarios and full game simulations, whereas smac is ideal for quick experiments and specific multi-agent tasks.

PettingZoo: An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

Pros of PettingZoo

  • Broader scope: Supports a wide variety of environments beyond just StarCraft
  • More active development: Regular updates and contributions from the community
  • Standardized API: Consistent interface across different environments

Cons of PettingZoo

  • Less specialized: May not have as deep integration with StarCraft as SMAC
  • Potentially more complex: The broader scope can make it more challenging to get started

Code Comparison

SMAC example:

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m")
env.reset()

PettingZoo example:

from pettingzoo.magent import battle_v3

env = battle_v3.env()
env.reset()

Both libraries provide easy-to-use interfaces for creating and interacting with environments. SMAC is more focused on StarCraft scenarios, while PettingZoo offers a wider range of environments with a standardized API. The choice between them depends on the specific requirements of your project and whether you need specialized StarCraft features or a more general-purpose multi-agent reinforcement learning framework.
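
To make the "standardized API" point concrete, below is a minimal sketch of PettingZoo's agent-by-agent (AEC) interaction loop. The exact signatures vary between PettingZoo releases (newer versions split done into termination/truncation flags and move MAgent into a separate package), so treat this as illustrative rather than definitive:

from pettingzoo.magent import battle_v3

env = battle_v3.env()
env.reset()
for agent in env.agent_iter():
    # last() returns the latest observation/reward/done/info for the current agent
    observation, reward, done, info = env.last()
    action = None if done else env.action_space(agent).sample()
    env.step(action)
env.close()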

mujoco-py: MuJoCo is a physics engine for detailed, efficient rigid body simulations with contacts. mujoco-py allows using MuJoCo from Python 3.

Pros of mujoco-py

  • Focuses on continuous control tasks and physics simulation
  • Provides a Python interface to the MuJoCo physics engine
  • Widely used in reinforcement learning research

Cons of mujoco-py

  • Historically required a MuJoCo license (MuJoCo has since been made free and open source)
  • Limited to single-agent environments
  • Less suitable for multi-agent scenarios

Code Comparison

mujoco-py:

import mujoco_py

# Load an MJCF model and advance the simulation by one timestep
model = mujoco_py.load_model_from_path("model.xml")
sim = mujoco_py.MjSim(model)
sim.step()

SMAC:

from smac.env import StarCraft2Env
import numpy as np

env = StarCraft2Env(map_name="8m")
env.reset()
# One random available action per agent; step() returns (reward, terminated, info)
actions = [np.random.choice(np.nonzero(env.get_avail_agent_actions(i))[0])
           for i in range(env.n_agents)]
reward, terminated, info = env.step(actions)

Key Differences

  • mujoco-py is designed for continuous control and physics simulation, while SMAC focuses on multi-agent reinforcement learning in StarCraft II
  • SMAC provides a higher-level interface specific to StarCraft II scenarios, whereas mujoco-py offers a more general-purpose physics simulation environment
  • mujoco-py requires additional setup and licensing, while SMAC is more self-contained and easier to get started with for multi-agent scenarios

Gym: A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Broader scope with a wide variety of environments, including classic control, robotics, and Atari games
  • Larger community and more extensive documentation
  • More flexible and easier to extend with custom environments

Cons of Gym

  • Less focused on multi-agent scenarios
  • May require additional wrappers or modifications for specific use cases
  • Can be overwhelming for beginners due to its extensive range of options

Code Comparison

SMAC environment setup:

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m")
env.reset()

Gym environment setup:

import gym

env = gym.make("CartPole-v1")
env.reset()

Summary

Gym offers a more versatile and widely-used platform for reinforcement learning, with a broader range of environments and stronger community support. However, SMAC provides a specialized focus on multi-agent scenarios in the StarCraft II domain, which may be more suitable for researchers working specifically on multi-agent reinforcement learning problems. The choice between the two depends on the specific requirements of the project and the desired level of complexity in multi-agent interactions.

ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • Broader application scope, supporting various game types and simulations
  • More extensive documentation and tutorials for beginners
  • Active community and regular updates from Unity Technologies

Cons of ml-agents

  • Steeper learning curve due to Unity integration
  • Potentially higher resource requirements for complex environments

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])

SMAC:

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="8m")
env.reset()

Key Differences

  • SMAC focuses specifically on StarCraft II environments, while ml-agents is more versatile
  • SMAC provides a simpler setup process for StarCraft II scenarios
  • ml-agents offers more flexibility in creating custom environments within Unity
  • SMAC is better suited for research in multi-agent reinforcement learning in RTS games
  • ml-agents has a wider range of pre-built environments and example projects

Both repositories serve different purposes and cater to distinct use cases in the field of reinforcement learning and multi-agent systems.

Google Research Football

Pros of Football

  • More realistic and complex environment, simulating real-world soccer dynamics
  • Supports both single-agent and multi-agent scenarios, offering greater flexibility
  • Provides a comprehensive API for custom scenario creation and modification

Cons of Football

  • Higher computational requirements due to the complex 3D environment
  • Steeper learning curve for researchers unfamiliar with soccer rules and strategies
  • Limited to soccer-specific scenarios, potentially reducing applicability to other domains

Code Comparison

SMAC example:

from smac.env import StarCraft2Env
import numpy as np

env = StarCraft2Env(map_name="8m")
env.reset()
terminated = False
while not terminated:
    # One random available action per agent
    actions = [np.random.choice(np.nonzero(env.get_avail_agent_actions(i))[0])
               for i in range(env.n_agents)]
    reward, terminated, _ = env.step(actions)

Football example:

import gfootball.env as football_env

env = football_env.create_environment(env_name='academy_empty_goal_close', stacked=False)
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

Both repositories offer multi-agent reinforcement learning environments, but SMAC focuses on StarCraft II scenarios, while Football provides a soccer simulation. SMAC is better suited for studying coordinated multi-agent tactics in a discrete action space, while Football offers a more continuous and dynamic environment for exploring complex decision-making in a sports context.

README

Note SMACv2 is out! Check it out here.

Warning Please pay attention to the version of SC2 used for your experiments. Performance is not always comparable between versions. The results in the SMAC paper use SC2.4.6.2.69232 not SC2.4.10.

SMAC - StarCraft Multi-Agent Challenge

SMAC is WhiRL's environment for research in the field of cooperative multi-agent reinforcement learning (MARL) based on Blizzard's StarCraft II RTS game. SMAC makes use of Blizzard's StarCraft II Machine Learning API and DeepMind's PySC2 to provide a convenient interface for autonomous agents to interact with StarCraft II, getting observations and performing actions. Unlike PySC2, SMAC concentrates on decentralised micromanagement scenarios, where each unit of the game is controlled by an individual RL agent.

Please refer to the accompanying paper and blogpost for the outline of our motivation for using SMAC as a testbed for MARL research and the initial experimental results.

About

Together with SMAC we also release PyMARL - our PyTorch framework for MARL research, which includes implementations of several state-of-the-art algorithms, such as QMIX and COMA.

Data from the runs used in the paper is included here. These runs are outdated due to subsequent changes in StarCraft II. If you run your experiments using the current version of SMAC, you should not compare your results with the ones provided here.

Quick Start

Installing SMAC

You can install SMAC by using the following command:

pip install git+https://github.com/oxwhirl/smac.git

Alternatively, you can clone the SMAC repository and then install smac with its dependencies:

git clone https://github.com/oxwhirl/smac.git
pip install -e smac/

NOTE: If you want to extend SMAC, please install the package as follows:

git clone https://github.com/oxwhirl/smac.git
cd smac
pip install -e ".[dev]"
pre-commit install

You may also need to upgrade pip: pip install --upgrade pip for the install to work.

Installing StarCraft II

SMAC is based on the full game of StarCraft II (versions >= 3.16.1). To install the game, follow the instructions below.

Linux

Please use Blizzard's repository to download the Linux version of StarCraft II. By default, the game is expected to be in the ~/StarCraftII/ directory. This can be changed by setting the SC2PATH environment variable.

MacOS/Windows

Please install StarCraft II from Battle.net. The free Starter Edition also works. If you use the default install location, PySC2 will find the latest binary automatically. Otherwise, similar to the Linux version, you need to set the SC2PATH environment variable to the location of the game.
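
If you prefer to set the path from your training script rather than your shell profile, one option is to export SC2PATH before the environment is created. This is only a sketch; the path below is a placeholder for your actual install directory:

import os

# Point PySC2/SMAC at a non-default StarCraft II install (placeholder path).
# This must be set before StarCraft2Env launches the game.
os.environ["SC2PATH"] = "/path/to/StarCraftII"

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m")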

SMAC maps

SMAC is composed of many combat scenarios with pre-configured maps. Before SMAC can be used, these maps need to be downloaded into the Maps directory of StarCraft II.

Download the SMAC Maps and extract them to your $SC2PATH/Maps directory. If you installed SMAC via git, simply copy the SMAC_Maps directory from smac/env/starcraft2/maps/ into the $SC2PATH/Maps directory.

List the maps

To see the list of SMAC maps, together with the number of ally and enemy units and episode limit, run:

python -m smac.bin.map_list 

Creating new maps

Users can extend SMAC by adding new maps/scenarios. To this end, one needs to:

  • Design a new map/scenario using StarCraft II Editor:
    • Please take a close look at the existing maps to understand the basics that we use (e.g. Triggers, Units, etc),
    • We make use of special RL units which never automatically start attacking the enemy. Here is the step-by-step guide on how to create new RL units based on existing SC2 units,
  • Add the map information in smac_maps.py (an illustrative registry entry is sketched below this list),
  • The newly designed RL units have new ids which need to be handled in starcraft2.py. Specifically, for heterogeneous maps containing more than one unit type, one needs to manually set the unit ids in the _init_ally_unit_types() function.
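
For orientation, a new scenario is registered by adding an entry to the map parameter registry in smac_maps.py. The sketch below is modeled on the existing entries; the map name and values are illustrative only, and the field names should be checked against an existing entry such as "3m" before use:

# Hypothetical registry entry for a custom "5m_custom" map (illustrative values).
map_param_registry["5m_custom"] = {
    "n_agents": 5,        # number of allied (RL-controlled) units
    "n_enemies": 5,       # number of enemy units
    "limit": 120,         # episode step limit
    "a_race": "T",        # ally race
    "b_race": "T",        # enemy race
    "unit_type_bits": 0,  # extra observation bits for heterogeneous unit types
    "map_type": "marines",
}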

Testing SMAC

Please run the following command to make sure that smac and its maps are properly installed.

python -m smac.examples.random_agents

Saving and Watching StarCraft II Replays

Saving a replay

If you’re using our PyMARL framework for multi-agent RL, here’s what needs to be done:

  1. Saving models: We run experiments on Linux servers with save_model = True (save_model_interval is also relevant) so that training checkpoints (the parameters of the neural networks) are saved (click here for more details).
  2. Loading models: Learnt models can be loaded using the checkpoint_path parameter. If you run PyMARL on MacOS (or Windows) while also setting save_replay=True, this will save a .SC2Replay file for test_nepisode episodes in test mode (no exploration) in the Replay directory of StarCraft II (click here for more details).

If you want to save replays without using PyMARL, simply call the save_replay() function of SMAC's StarCraft2Env in your training/testing code. This will save a replay of all episodes since the launch of the StarCraft II client.
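
As a rough sketch of this non-PyMARL route (the replay_dir and replay_prefix constructor arguments are optional and shown only to flag that they exist; leaving them out uses the StarCraft II client's default replay location):

from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="3m", replay_dir="", replay_prefix="")

# ... run your evaluation episodes here ...

env.save_replay()  # writes a .SC2Replay covering the episodes run since the client launched
env.close()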

The easiest way to save and later watch a replay on Linux is to use Wine.

Watching a replay

You can watch the saved replay directly within the StarCraft II client on MacOS/Windows by clicking on the corresponding Replay file.

You can also watch saved replays by running:

python -m pysc2.bin.play --norender --replay <path-to-replay>

This works for any replay as long as the map can be found by the game.

For more information, please refer to PySC2 documentation.

Documentation

For the detailed description of the environment, read the SMAC documentation.

The initial results of our experiments using SMAC can be found in the accompanying paper.

Citing SMAC

If you use SMAC in your research, please cite the SMAC paper.

M. Samvelyan, T. Rashid, C. Schroeder de Witt, G. Farquhar, N. Nardelli, T.G.J. Rudner, C.-M. Hung, P.H.S. Torr, J. Foerster, S. Whiteson. The StarCraft Multi-Agent Challenge, CoRR abs/1902.04043, 2019.

In BibTeX format:

@article{samvelyan19smac,
  title = {{The} {StarCraft} {Multi}-{Agent} {Challenge}},
  author = {Mikayel Samvelyan and Tabish Rashid and Christian Schroeder de Witt and Gregory Farquhar and Nantas Nardelli and Tim G. J. Rudner and Chia-Man Hung and Philip H. S. Torr and Jakob Foerster and Shimon Whiteson},
  journal = {CoRR},
  volume = {abs/1902.04043},
  year = {2019},
}

Code Examples

Below is a small code example which illustrates how SMAC can be used. Here, individual agents execute random policies after receiving the observations and global state from the environment.

If you want to try the state-of-the-art algorithms (such as QMIX and COMA) on SMAC, make use of PyMARL - our framework for MARL research.

from smac.env import StarCraft2Env
import numpy as np


def main():
    env = StarCraft2Env(map_name="8m")
    env_info = env.get_env_info()

    n_actions = env_info["n_actions"]
    n_agents = env_info["n_agents"]

    n_episodes = 10

    for e in range(n_episodes):
        env.reset()
        terminated = False
        episode_reward = 0

        while not terminated:
            obs = env.get_obs()
            state = env.get_state()
            # env.render()  # Uncomment for rendering

            actions = []
            for agent_id in range(n_agents):
                avail_actions = env.get_avail_agent_actions(agent_id)
                avail_actions_ind = np.nonzero(avail_actions)[0]
                action = np.random.choice(avail_actions_ind)
                actions.append(action)

            reward, terminated, _ = env.step(actions)
            episode_reward += reward

        print("Total reward in episode {} = {}".format(e, episode_reward))

    env.close()


if __name__ == "__main__":
    main()

RLlib Examples

You can also run SMAC environments in RLlib, which includes scalable algorithms such as PPO and IMPALA. Check out the example code here.

PettingZoo Example

Thanks to Rodrigo de Lazcano, SMAC now supports the PettingZoo API and PyGame environment rendering. Check out the example code here.