
google-research/football


Top Related Projects

  • ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
  • Gym: A toolkit for developing and comparing reinforcement learning algorithms.
  • Open Spiel: A collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
  • Universe: A software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.
  • PettingZoo: An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.
  • AirSim: An open-source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research.

Quick Overview

Google Research Football is an open-source environment for reinforcement learning research. It provides a challenging scenario based on football (soccer) gameplay, allowing researchers to develop and test AI agents in a complex, multi-agent setting. The environment offers various modes and difficulty levels, making it suitable for a wide range of research applications.

Pros

  • Realistic and complex multi-agent environment for AI research
  • Customizable scenarios and difficulty levels
  • Supports both single-agent and multi-agent training (see the sketch after this list)
  • Integrates well with popular RL frameworks like OpenAI Gym
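
A minimal sketch of how these strengths surface in the API, assuming the create_environment keyword arguments and scenario names below match your installed gfootball version:

import gfootball.env as football_env

# Difficulty is selected via the scenario name; multi-agent training is
# enabled by controlling several players at once (assumed API; verify
# against your gfootball version).
env = football_env.create_environment(
    env_name="11_vs_11_easy_stochastic",      # easier built-in opponent
    representation="simple115v2",             # compact float-vector observations
    rewards="scoring,checkpoints",            # denser shaped reward
    number_of_left_players_agent_controls=3,  # control three left-team players
)

obs = env.reset()  # one observation per controlled player
done = False
while not done:
    # With several controlled players the action space is a MultiDiscrete,
    # so sample() yields one action per player.
    actions = env.action_space.sample()
    obs, reward, done, info = env.step(actions)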

Cons

  • Steep learning curve for beginners in reinforcement learning
  • Computationally intensive, requiring significant resources for training
  • Limited documentation for advanced features and customizations
  • May not be directly applicable to real-world robotics or non-game scenarios

Code Examples

  1. Creating and running a simple environment:
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",
    stacked=False,
    logdir='/tmp/football',
    write_goal_dumps=False,
    write_full_episode_dumps=False,
    render=False
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # Random action
    obs, reward, done, info = env.step(action)
  2. Training a PPO agent using Stable Baselines3:
from stable_baselines3 import PPO
import gfootball.env as football_env

env = football_env.create_environment(env_name="academy_3_vs_1_with_keeper", render=True)
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1000000)
model.save("ppo_football")
  3. Evaluating a trained model:
from stable_baselines3 import PPO
import gfootball.env as football_env

env = football_env.create_environment(env_name="academy_3_vs_1_with_keeper", render=True)
model = PPO.load("ppo_football")

obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()

Getting Started

  1. Install the Google Research Football environment:

    pip install gfootball
    
  2. Create a simple environment and run a random agent:

    import gfootball.env as football_env
    
    env = football_env.create_environment(env_name="academy_empty_goal_close", render=True)
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
    
  3. For more advanced usage (for example, alternative observation representations, sketched below), refer to the project's GitHub repository and documentation.
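
As a pointer for that advanced usage, the representation argument controls what the agent observes. A minimal sketch, assuming the representation names below ('extracted', 'simple115v2', 'raw') match your installed gfootball version:

import gfootball.env as football_env

# The same scenario under different observation representations
# (representation names assumed from the gfootball docs).
for representation in ("extracted", "simple115v2", "raw"):
    env = football_env.create_environment(
        env_name="academy_empty_goal_close",
        representation=representation,
    )
    obs = env.reset()
    # 'extracted' -> 72x96x4 mini-map tensor, 'simple115v2' -> 115-float
    # vector, 'raw' -> list of dicts with the full game state.
    print(representation, type(obs), getattr(obs, "shape", None))
    env.close()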

Competitor Comparisons

ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • Broader application: Supports a wide range of environments and game types
  • More extensive documentation and tutorials
  • Active community and regular updates

Cons of ml-agents

  • Steeper learning curve for beginners
  • Requires Unity engine knowledge
  • More complex setup process

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])

football:

import gfootball.env as football_env

env = football_env.create_environment(
    env_name="11_vs_11_stochastic",
    representation="raw",
    render=True
)

The ml-agents code shows environment creation with a side channel for configuration, while the football code demonstrates a simpler environment setup focused on football scenarios. ml-agents offers more flexibility but requires more setup, whereas football provides a more straightforward approach for its specific domain.

Gym

A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Broader scope with a wide variety of environments, not limited to football
  • More established and widely used in the reinforcement learning community
  • Extensive documentation and community support

Cons of Gym

  • Less specialized for football-specific tasks
  • May require more setup and customization for complex scenarios
  • Potentially less realistic physics simulation for sports-related tasks

Code Comparison

Football:

env = football_env.create_environment(
    env_name="academy_3_vs_1_with_keeper",
    representation="raw",
    render=True
)

Gym:

import gym
env = gym.make('CartPole-v1')
observation, info = env.reset(seed=42)

The Football environment is purpose-built for football-related tasks with a realistic football simulation, while Gym provides a general-purpose framework covering a much wider range of environments. The code examples show the difference in environment creation and initialization between the two libraries.

Open Spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

Pros of Open Spiel

  • Broader scope: Covers a wide range of games and algorithms, not limited to football
  • More flexible: Allows for easy implementation of new games and algorithms
  • Better documentation: Comprehensive guides and examples for users and contributors

Cons of Open Spiel

  • Less specialized: May not provide as deep an experience for football-specific simulations
  • Higher complexity: Steeper learning curve due to its broader scope
  • Potentially slower: Generalized framework might sacrifice some performance for flexibility

Code Comparison

Football:

env = football_env.create_environment(
    env_name="academy_3_vs_1_with_keeper", representation="raw")
obs = env.reset()
action = env.action_space.sample()
obs, reward, done, info = env.step(action)

Open Spiel:

game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
legal_actions = state.legal_actions()
state.apply_action(legal_actions[0])

Both repositories provide environments for reinforcement learning, but Football focuses on soccer simulations while Open Spiel offers a more diverse set of games and algorithms for research in game theory and AI.

Universe

A software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

Pros of Universe

  • Broader scope, supporting multiple environments beyond just football
  • Designed for general AI training across diverse tasks
  • Integrates with popular deep learning frameworks

Cons of Universe

  • Less focused, potentially more complex to set up for specific use cases
  • May require more computational resources due to its broader scope
  • Less active development and community support in recent years

Code Comparison

Football:

env = football_env.create_environment(
    env_name="academy_empty_goal_close",
    stacked=False,
    representation='raw',
    rewards='scoring,checkpoints')

Universe:

import gym
import universe

env = gym.make('flashgames.DuskDrive-v0')
observation_n = env.reset()

Key Differences

  • Football focuses specifically on soccer simulations, while Universe aims to provide a general platform for AI training across various environments
  • Football offers more detailed control over the soccer environment, while Universe provides a wider range of pre-built environments
  • Football has more recent updates and active development, whereas Universe has seen less activity in recent years

Use Cases

  • Football: Ideal for researchers and developers focused on soccer-specific AI and reinforcement learning
  • Universe: Better suited for those working on general AI capabilities across multiple domains and task types

PettingZoo

An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.

Pros of PettingZoo

  • Broader scope: Supports a wide variety of multi-agent environments beyond just football
  • More flexible: Allows for custom environment creation and modification
  • Active community: Regular updates and contributions from a diverse group of developers

Cons of PettingZoo

  • Less specialized: May not offer as deep a simulation for football specifically
  • Steeper learning curve: Due to its broader scope, it may take longer to get started with a specific use case

Code Comparison

PettingZoo example:

from pettingzoo.butterfly import knights_archers_zombies_v10

env = knights_archers_zombies_v10.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, done, info = env.last()
    # `policy` is a placeholder for your action-selection function;
    # PettingZoo expects a None action once an agent is done.
    action = policy(observation) if not done else None
    env.step(action)

Football example:

import gfootball.env as football_env
env = football_env.create_environment(env_name="11_vs_11_stochastic")
obs = env.reset()
action = env.action_space.sample()  # actions are plain discrete integers
obs, reward, done, info = env.step(action)

Both repositories provide environments for reinforcement learning, but PettingZoo offers a more diverse set of environments while Football focuses specifically on soccer simulations. The code structure differs, with PettingZoo using an agent iteration approach and Football using a more traditional step-based interaction.

AirSim

An open-source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research.

Pros of AirSim

  • More versatile simulation environment, supporting various vehicles (drones, cars) and scenarios
  • Highly realistic physics and graphics, leveraging Unreal Engine
  • Extensive API support for multiple programming languages

Cons of AirSim

  • Steeper learning curve due to complex setup and configuration
  • Higher system requirements for smooth operation
  • Less focused on a specific domain, potentially overwhelming for beginners

Code Comparison

AirSim (Python):

import airsim

client = airsim.MultirotorClient()
client.takeoffAsync().join()
client.moveToPositionAsync(0, 0, -10, 5).join()

Football (Python):

import gfootball.env as football_env

env = football_env.create_environment(env_name="11_vs_11_stochastic")
obs = env.reset()
action = env.action_space.sample()

Both repositories provide simulation environments for AI research, but they cater to different domains. Football focuses on creating a platform for reinforcement learning in the context of soccer, while AirSim offers a more general-purpose simulation environment for autonomous systems. Football is more accessible for researchers interested in team sports and multi-agent scenarios, while AirSim provides a highly realistic environment for robotics and computer vision research across various applications.


README

Google Research Football

This repository contains an RL environment based on the open-source game Gameplay Football.
It was created by the Google Brain team for research purposes.


We'd like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.

Quick Start

In colab

Open our example Colab, which will let you start training your model in less than 2 minutes.

This method doesn't support game rendering on screen - if you want to see the game running, please use the method below.

Using Docker

This is the recommended way for Linux-based systems to avoid incompatible package versions. Instructions are available here.

On your computer

1. Install required packages

Linux

sudo apt-get install git cmake build-essential libgl1-mesa-dev libsdl2-dev \
libsdl2-image-dev libsdl2-ttf-dev libsdl2-gfx-dev libboost-all-dev \
libdirectfb-dev libst-dev mesa-utils xvfb x11vnc python3-pip

python3 -m pip install --upgrade pip setuptools psutil wheel

macOS

First install brew. It should automatically install Command Line Tools. Next install required packages:

brew install git python3 cmake sdl2 sdl2_image sdl2_ttf sdl2_gfx boost boost-python3

python3 -m pip install --upgrade pip setuptools psutil wheel

Windows

Install Git and Python 3. Update pip from the command line (here and for the next steps, type python instead of python3):

python -m pip install --upgrade pip setuptools psutil wheel

2. Install GFootball

Option a. From PyPi package (recommended)

python3 -m pip install gfootball

Option b. Install from source using the GitHub repository

(On Windows you have to install additional tools and set an environment variable, see Compiling Engine for detailed instructions.)

git clone https://github.com/google-research/football.git
cd football

Optionally, you can use a virtual environment:

python3 -m venv football-env
source football-env/bin/activate

Next, build the game engine and install dependencies:

python3 -m pip install .

This command can run for a couple of minutes, as it compiles the C++ environment in the background. If you face any problems, first check the Compiling Engine documentation and search the GitHub issues.

3. Time to play!

python3 -m gfootball.play_game --action_set=full

Make sure to check out the keyboard mappings. To quit the game press Ctrl+C in the terminal.


Training agents to play GRF

Run training

To run TF training, you need to install additional dependencies:

  • Update pip so that TensorFlow 1.15 is available: python3 -m pip install --upgrade pip setuptools wheel
  • TensorFlow: python3 -m pip install tensorflow==1.15.* or python3 -m pip install tensorflow-gpu==1.15.*, depending on whether you want CPU or GPU version;
  • Sonnet and psutil: python3 -m pip install dm-sonnet==1.* psutil;
  • OpenAI Baselines: python3 -m pip install git+https://github.com/openai/baselines.git@master.

Then:

  • To run an example PPO experiment on the academy_empty_goal scenario, run python3 -m gfootball.examples.run_ppo2 --level=academy_empty_goal_close
  • To run on the academy_pass_and_shoot_with_keeper scenario, run python3 -m gfootball.examples.run_ppo2 --level=academy_pass_and_shoot_with_keeper

To train while saving nice replays, run python3 -m gfootball.examples.run_ppo2 --dump_full_episodes=True --render=True

To reproduce the PPO results from the paper, please refer to:

  • gfootball/examples/repro_checkpoint_easy.sh
  • gfootball/examples/repro_scoring_easy.sh

Playing the game

Please note that playing the game is implemented through an environment, so human-controlled players use the same interface as the agents. One important implication is that there is a single action per 100 ms reported to the environment, which might cause a lag effect when playing.

Keyboard mappings

The game defines the following keyboard mappings (for the keyboard player type):

  • ARROW UP - run to the top.
  • ARROW DOWN - run to the bottom.
  • ARROW LEFT - run to the left.
  • ARROW RIGHT - run to the right.
  • S - short pass in the attack mode, pressure in the defense mode.
  • A - high pass in the attack mode, sliding in the defense mode.
  • D - shot in the attack mode, team pressure in the defense mode.
  • W - long pass in the attack mode, goalkeeper pressure in the defense mode.
  • Q - switch the active player in the defense mode.
  • C - dribble in the attack mode.
  • E - sprint.

Play vs built-in AI

Run python3 -m gfootball.play_game --action_set=full. By default, it starts the base scenario with the left player controlled by the keyboard. Different types of players are supported (gamepad, external bots, agents...). For possible options, run python3 -m gfootball.play_game -helpfull.

Play vs pre-trained agent

In particular, one can play against an agent trained with the run_ppo2 script using the following command (note there is no action_set flag, as the PPO agent uses the default action set): python3 -m gfootball.play_game --players "keyboard:left_players=1;ppo2_cnn:right_players=1,checkpoint=$YOUR_PATH"

Trained checkpoints

We provide trained PPO checkpoints for the following scenarios:

To see the checkpoints playing, run python3 -m gfootball.play_game --players "ppo2_cnn:left_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT" --level=$LEVEL, where $CHECKPOINT is the path to the downloaded checkpoint. Please note that the checkpoints were trained with TensorFlow 1.15; using a different TensorFlow version may result in errors. The easiest way to run these checkpoints is through the provided Dockerfile_examples image. See running in docker for details (just override the default Docker definition with the -f Dockerfile_examples parameter).

To train against a checkpoint, you can pass the extra_players argument to the create_environment function, for example extra_players='ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT'.
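
A minimal sketch of that call, assuming extra_players accepts a list of player-definition strings and with the checkpoint path left as a placeholder:

import gfootball.env as football_env

# Train the left-side agent against a fixed PPO checkpoint on the right.
# /path/to/checkpoint is a placeholder; substitute a real checkpoint path.
env = football_env.create_environment(
    env_name="11_vs_11_stochastic",
    extra_players=[
        "ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,"
        "checkpoint=/path/to/checkpoint"
    ],
)
obs = env.reset()  # the usual reset/step loop is unchanged from here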