Top Related Projects
- OpenSpiel: a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
- Gym: a toolkit for developing and comparing reinforcement learning algorithms.
- PettingZoo: an API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.
- SMAC: the StarCraft Multi-Agent Challenge.
- Gymnasium: an API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym).
- ML-Agents: the Unity Machine Learning Agents Toolkit, an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
Quick Overview
PySC2 is the Python component of the StarCraft II Learning Environment (SC2LE), developed by DeepMind in collaboration with Blizzard Entertainment. It provides an interface for RL agents to interact with StarCraft II, offering a challenging environment for testing and developing AI algorithms in a complex, real-time strategy game setting.
Pros
- Provides a standardized environment for AI research in complex, multi-agent scenarios
- Offers a wide range of observation and action spaces, allowing for diverse AI experiments
- Includes pre-built mini-games for focused learning on specific aspects of the game
- Integrates well with popular machine learning libraries like TensorFlow and PyTorch
Cons
- Requires a local installation of StarCraft II (the free Starter Edition is sufficient)
- Can be computationally intensive, especially for large-scale experiments
- Learning curve may be steep for researchers not familiar with StarCraft II mechanics
- Limited to the specific game environment of StarCraft II, which may not generalize to all real-world scenarios
Code Examples
- Initializing the SC2 environment:
from pysc2.env import sc2_env
from pysc2.lib import features
env = sc2_env.SC2Env(
    map_name="Simple64",
    players=[sc2_env.Agent(sc2_env.Race.terran),
             sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.very_easy)],
    agent_interface_format=features.AgentInterfaceFormat(
        feature_dimensions=features.Dimensions(screen=84, minimap=64),
        use_feature_units=True),
    step_mul=8,
    game_steps_per_episode=0,
    visualize=True)
- Taking a step in the environment:
# env.reset() returns one TimeStep per agent; take the first for a single agent.
obs = env.reset()[0]
done = False
while not done:
    # `agent` is any object whose step(obs) returns a FunctionCall action.
    action = agent.step(obs)
    obs = env.step([action])[0]
    done = obs.last()
- Accessing observation data:
screen = obs.observation.feature_screen
minimap = obs.observation.feature_minimap
available_actions = obs.observation.available_actions
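- Using the observation to pick an action (a hedged sketch; it assumes the obs from the loop above and reads the named feature layers):
import numpy as np
from pysc2.lib import actions, features
# Individual layers of feature_screen can be read by name.
player_relative = obs.observation.feature_screen.player_relative
friendly_y, friendly_x = np.nonzero(player_relative == features.PlayerRelative.SELF)
# Only issue an action the game currently allows.
if actions.FUNCTIONS.select_army.id in obs.observation.available_actions:
    action = actions.FUNCTIONS.select_army("select")
else:
    action = actions.FUNCTIONS.no_op()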
Getting Started
- Install StarCraft II. If it is not in the default install location, set the SC2PATH environment variable to the game's directory.
- Install PySC2:
pip install pysc2
- Run a simple agent:
from pysc2.agents import random_agent
from pysc2.env import run_loop
from pysc2.env import sc2_env
def main():
    agent = random_agent.RandomAgent()
    try:
        with sc2_env.SC2Env(map_name="Simple64") as env:
            run_loop.run_loop([agent], env, max_episodes=1)
    except KeyboardInterrupt:
        pass

if __name__ == "__main__":
    main()
This will run a random agent on the Simple64 map for one episode.
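Note that recent PySC2 versions may require players and an agent_interface_format to be passed to SC2Env explicitly. A hedged, fuller version of the same quick start (the extra arguments mirror the earlier initialization example):
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import features

def main():
    agent = random_agent.RandomAgent()
    try:
        with sc2_env.SC2Env(
                map_name="Simple64",
                players=[sc2_env.Agent(sc2_env.Race.random),
                         sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.very_easy)],
                agent_interface_format=features.AgentInterfaceFormat(
                    feature_dimensions=features.Dimensions(screen=84, minimap=64))) as env:
            run_loop.run_loop([agent], env, max_episodes=1)
    except KeyboardInterrupt:
        pass

if __name__ == "__main__":
    main()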
Competitor Comparisons
OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
Pros of OpenSpiel
- Supports a wider variety of games and environments
- Offers a more flexible framework for reinforcement learning research
- Provides implementations of numerous algorithms and game-theoretic tools
Cons of OpenSpiel
- Less specialized for StarCraft II, which may require additional work for SC2-specific tasks
- Potentially steeper learning curve due to its broader scope
Code Comparison
PySC2:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
OpenSpiel:
import pyspiel
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
legal_actions = state.legal_actions()
action = legal_actions[0]
Both repositories provide environments for reinforcement learning, but PySC2 is specifically tailored for StarCraft II, while OpenSpiel offers a more general framework for various games and environments.
Gym: A toolkit for developing and comparing reinforcement learning algorithms.
Pros of Gym
- Broader scope, supporting a wide variety of environments beyond just StarCraft II
- More extensive documentation and community support
- Easier to set up and use for beginners in reinforcement learning
Cons of Gym
- Less specialized for complex game environments like StarCraft II
- May require additional wrappers or modifications for advanced scenarios
- Potentially lower performance for specific game-related tasks
Code Comparison
PySC2:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
Gym:
import gym
env = gym.make('CartPole-v1')
obs = env.reset()
action = env.action_space.sample()
Both libraries provide similar high-level APIs for interacting with environments, but PySC2 is tailored specifically for StarCraft II, while Gym offers a more generic interface for various reinforcement learning tasks. PySC2 provides more detailed observations and complex action spaces specific to StarCraft II, whereas Gym's environments are typically simpler and more diverse.
PettingZoo: An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.
Pros of PettingZoo
- Supports a wider variety of environments beyond just StarCraft II
- Provides a standardized API for multi-agent reinforcement learning
- Offers easier integration with popular RL libraries like Stable Baselines3
Cons of PettingZoo
- Less specialized for StarCraft II, potentially lacking some game-specific features
- May have a steeper learning curve for users specifically interested in StarCraft II
Code Comparison
PettingZoo example:
from pettingzoo.classic import rps_v2
env = rps_v2.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # Finished agents must be stepped with a None action.
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
pysc2 example:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
obs = env.step([action])
PettingZoo offers a more generalized approach for multi-agent environments, while pysc2 is tailored specifically for StarCraft II. The choice between them depends on the specific requirements of your project and whether you need StarCraft II-specific features or a more versatile multi-agent framework.
SMAC: The StarCraft Multi-Agent Challenge
Pros of SMAC
- Focused on multi-agent reinforcement learning scenarios
- Provides pre-configured maps and scenarios for easier experimentation
- Offers a simpler API for multi-agent control
Cons of SMAC
- Limited to specific multi-agent scenarios, less flexible than PySC2
- Smaller community and fewer resources compared to PySC2
- Less comprehensive documentation and examples
Code Comparison
SMAC example:
from smac.env import StarCraft2Env
import numpy as np
env = StarCraft2Env(map_name="8m")
env.reset()
terminated = False
while not terminated:
    # Sample a random action from each agent's currently available actions.
    actions = [np.random.choice(np.nonzero(env.get_avail_agent_actions(i))[0])
               for i in range(env.n_agents)]
    reward, terminated, _ = env.step(actions)
PySC2 example:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
obs = env.step([action])
Both repositories provide environments for reinforcement learning in StarCraft II, but SMAC focuses on multi-agent scenarios while PySC2 offers a more comprehensive interface to the game. SMAC is easier to use for specific multi-agent tasks, while PySC2 provides more flexibility and a broader range of possibilities for AI research in StarCraft II.
Gymnasium: An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym).
Pros of Gymnasium
- Broader scope: Supports a wide range of environments beyond just StarCraft II
- Active development: More frequent updates and contributions from the community
- Standardized API: Consistent interface across different environments
Cons of Gymnasium
- Less specialized: May lack some specific features for StarCraft II research
- Learning curve: Broader scope may require more time to understand all capabilities
- Potential overhead: Supporting multiple environments might introduce unnecessary complexity for focused StarCraft II research
Code Comparison
PySC2:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
Gymnasium:
import gymnasium as gym
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)
action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
Pros of ml-agents
- Broader application: Can be used for various types of games and simulations in Unity
- More accessible: Easier to set up and use for developers familiar with Unity
- Active community: Larger user base and more frequent updates
Cons of ml-agents
- Less specialized: Not optimized specifically for complex strategy games like StarCraft II
- Performance: May have lower performance for large-scale simulations compared to PySC2
Code Comparison
ml-agents:
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])
PySC2:
from pysc2.env import sc2_env
from pysc2.lib import actions
env = sc2_env.SC2Env(map_name="Simple64")
obs = env.reset()
action = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
Both libraries provide environments for reinforcement learning, but ml-agents is more versatile for Unity-based projects, while PySC2 is specifically designed for StarCraft II research and development.
README
PySC2 - StarCraft II Learning Environment
PySC2 is DeepMind's Python component of the StarCraft II Learning Environment (SC2LE). It exposes Blizzard Entertainment's StarCraft II Machine Learning API as a Python RL Environment. This is a collaboration between DeepMind and Blizzard to develop StarCraft II into a rich environment for RL research. PySC2 provides an interface for RL agents to interact with StarCraft 2, getting observations and sending actions.
We have published an accompanying blogpost and paper, which outlines our motivation for using StarCraft II for DeepRL research, and some initial research results using the environment.
About
Disclaimer: This is not an official Google product.
If you use the StarCraft II Machine Learning API and/or PySC2 in your research, please cite the StarCraft II paper.
You can reach us at pysc2@deepmind.com.
Quick Start Guide
Get PySC2
PyPI
The easiest way to get PySC2 is to use pip:
$ pip install pysc2
That will install the pysc2 package along with all the required dependencies. virtualenv can help manage your dependencies. You may also need to upgrade pip (pip install --upgrade pip) for the pysc2 install to work. If you're running on an older system you may need to install libsdl libraries for the pygame dependency.
Pip will install a few of the binaries to your bin directory. pysc2_play can be used as a shortcut to python -m pysc2.bin.play.
From Source
Alternatively you can install the latest PySC2 codebase from the git master branch:
$ pip install --upgrade https://github.com/deepmind/pysc2/archive/master.zip
or from a local clone of the git repo:
$ git clone https://github.com/deepmind/pysc2.git
$ pip install --upgrade pysc2/
Get StarCraft II
PySC2 depends on the full StarCraft II game and only works with versions that include the API, which is 3.16.1 and above.
Linux
Follow Blizzard's documentation to get the Linux version. By default, PySC2 expects the game to live in ~/StarCraftII/. You can override this path by setting the SC2PATH environment variable or creating your own run_config.
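For example, if the game lives somewhere else:
$ export SC2PATH="/path/to/StarCraftII"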
Windows/MacOS
Install the game as normal from Battle.net. Even the Starter Edition will work.
If you used the default install location PySC2 should find the latest binary. If you changed the install location, you might need to set the SC2PATH environment variable with the correct location.
PySC2 should work on MacOS and Windows systems running Python 3.8+, but has only been thoroughly tested on Linux. We welcome suggestions and patches for better compatibility with other systems.
Get the maps
PySC2 has many maps pre-configured, but they need to be downloaded into the SC2 Maps directory before they can be played.
Download the ladder maps and the mini games and extract them to your StarCraftII/Maps/ directory.
Run an agent
You can run an agent to test the environment. The UI shows you the actions of the agent and is helpful for debugging and visualization purposes.
$ python -m pysc2.bin.agent --map Simple64
It runs a random agent by default, but you can specify others if you'd like, including your own.
$ python -m pysc2.bin.agent --map CollectMineralShards --agent pysc2.agents.scripted_agent.CollectMineralShards
You can also run two agents against each other.
$ python -m pysc2.bin.agent --map Simple64 --agent2 pysc2.agents.random_agent.RandomAgent
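To plug in your own agent, the usual pattern is to subclass base_agent.BaseAgent and return a FunctionCall from its step method. A minimal sketch (the class name here is made up):
from pysc2.agents import base_agent
from pysc2.lib import actions

class NoOpAgent(base_agent.BaseAgent):
  """Does nothing every step; a starting point for your own logic."""

  def step(self, obs):
    super(NoOpAgent, self).step(obs)
    return actions.FUNCTIONS.no_op()
Save it somewhere on your Python path and pass it with --agent, just like the scripted agent above.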
To specify the agent's race, the opponent's difficulty, and more, you can pass additional flags. Run with --help to see what you can change.
Play the game as a human
There is a human agent interface which is mainly used for debugging, but it can also be used to play the game. The UI is fairly simple and incomplete, but it's enough to understand the basics of the game. Also, it runs on Linux.
$ python -m pysc2.bin.play --map Simple64
In the UI, hit ? for a list of the hotkeys. The most basic ones are: F4 to quit, F5 to restart, F8 to save a replay, and Pgup/Pgdn to control the speed of the game. Otherwise use the mouse for selection and keyboard for commands listed on the left.
The left side is a basic rendering. The right side is the feature layers that the agent receives, with some coloring to make it more useful to us. You can enable or disable RGB or feature layer rendering and their resolutions with command-line flags.
Watch a replay
Running an agent and playing as a human save a replay by default. You can watch that replay by running:
$ python -m pysc2.bin.play --replay <path-to-replay>
This works for any replay as long as the map can be found by the game.
The same controls work as for playing the game, so F4 to exit, Pgup/Pgdn to control the speed, etc.
You can save a video of the replay with the --video flag.
List the maps
Maps need to be configured before they're known to the environment. You can see the list of known maps by running:
$ python -m pysc2.bin.map_list
Run the tests
If you want to submit a pull request, please make sure the tests pass.
$ python -m pysc2.bin.run_tests
Environment Details
For a full description of how the environment is configured and how the observation and action spaces work, read the environment documentation.
Note that an alternative to this environment is now available which provides an enriched action and observation format using the C++ wrappers developed for AlphaStar. See the converter documentation for more information.
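For a quick, hands-on look at what the standard environment exposes, you can inspect the per-agent specs directly. A hedged sketch (the exact spec contents depend on the interface format you request):
from pysc2.env import sc2_env
from pysc2.lib import features

with sc2_env.SC2Env(
        map_name="Simple64",
        players=[sc2_env.Agent(sc2_env.Race.terran),
                 sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.very_easy)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64))) as env:
    obs_spec = env.observation_spec()[0]  # one spec per agent
    act_spec = env.action_spec()[0]
    print(obs_spec["feature_screen"])     # shape of the screen feature layers
    print(act_spec.types.screen)          # spatial argument sized to the screen resolution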
Mini-game maps
The mini-game map files referenced in the paper are stored under pysc2/maps/ but must be installed in $SC2PATH/Maps. Make sure to follow the download instructions above.
Maps are configured in the Python files in pysc2/maps/. The configs can set player and time limits, whether to use the game outcome or curriculum score, and a handful of other things. For more information about the maps, and how to configure your own, read the maps documentation.
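As a rough illustration of that pattern, a config is a small class. This sketch follows the style of the existing mini-game configs, but MyMiniGame and its map file are made-up placeholders:
from pysc2.maps import lib

class MyMiniGame(lib.Map):
  directory = "mini_games"  # subdirectory under your Maps directory
  filename = "MyMiniGame"   # name of the .SC2Map file (hypothetical)
  players = 1
  score_index = 0           # use the map's score rather than win/loss as the reward
  step_mul = 8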
Replays
A replay lets you review what happened during a game. You can see the actions and observations that each player made as they played.
Blizzard is releasing a large number of anonymized 1v1 replays played on the ladder. You can find instructions for how to get the replay files on their site. You can also review your own replays.
Replays can be played back to get the observations and actions made during that game. The observations are rendered at the resolution you request, so may differ from what the human actually saw. Similarly the actions specify a point, which could reflect a different pixel on the human's screen, so may not have an exact match in our observations, though they should be fairly similar.
Replays are version dependent, so a 3.16 replay will fail in a 3.16.1 or 3.17 binary.
You can visualize the replays with the full game, or with pysc2.bin.play. Alternatively you can run pysc2.bin.replay_actions to process many replays in parallel.
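The exact flags vary between versions (run with --help to check), but processing a directory of replays typically looks something like this; the --replays flag name is an assumption here:
$ python -m pysc2.bin.replay_actions --replays ~/StarCraftII/Replays/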