Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.


Top Related Projects

  • OpenAI Gym: A toolkit for developing and comparing reinforcement learning algorithms.
  • DeepMind OpenSpiel: A collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
  • Google Research Football: An open-source reinforcement learning environment for football (soccer).
  • Microsoft AirSim: An open-source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research.
  • DeepMind dm_control: Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
  • OpenAI Universe: A software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

Quick Overview

Unity ML-Agents is an open-source project that enables games and simulations created with the Unity Editor to serve as environments for training intelligent agents. It provides implementations of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D, and VR/AR games.

Pros

  • Seamless integration with Unity, allowing developers to create complex training environments
  • Supports a wide range of learning algorithms, including PPO, SAC, and imitation learning
  • Provides a Python API for training agents, making it accessible to machine learning researchers
  • Includes a large number of example environments and pre-trained models

Cons

  • Steep learning curve for those new to both Unity and machine learning
  • Performance can be an issue for complex environments or large-scale training
  • Documentation can be overwhelming and sometimes outdated
  • Limited support for certain advanced reinforcement learning techniques

Code Examples

  1. Creating a simple agent behavior:
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using Unity.MLAgents.Actuators;

public class SimpleAgent : Agent
{
    public override void OnEpisodeBegin()
    {
        // Reset the agent's state at the beginning of each episode
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Collect observations from the environment
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Perform actions based on the received action buffers
    }

    public override void Heuristic(in ActionBuffers actionsOut)
    {
        // Define heuristic behavior for manual control
    }
}
  2. Configuring a training behavior:
behaviors:
  SimpleAgent:
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 12000
      learning_rate: 3.0e-4
      beta: 5.0e-3
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 12000
  3. Training an agent using the Python API:
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name="MyEnvironment", side_channels=[channel])
channel.set_configuration_parameters(time_scale=20)  # run the simulation at 20x speed

env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

decision_steps, terminal_steps = env.get_steps(behavior_name)
env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
env.step()
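
To run a complete episode rather than a single step, the same calls can be wrapped in a loop. A minimal sketch, continuing from the environment created above:

decision_steps, terminal_steps = env.get_steps(behavior_name)
while len(terminal_steps) == 0:
    # Keep sending random actions until at least one agent's episode ends
    env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
    env.step()
    decision_steps, terminal_steps = env.get_steps(behavior_name)
env.close()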

Getting Started

  1. Install Unity (2020.3 LTS or later) and the ML-Agents package
  2. Create a new Unity project and import the ML-Agents package
  3. Create a new scene and add an agent GameObject
  4. Attach a script inheriting from Agent to the GameObject
  5. Implement the required methods: OnEpisodeBegin, CollectObservations, OnActionReceived
  6. Set up the training environment and configure the training parameters
  7. Use the Python API to start the training process (a minimal sketch follows below)

For detailed instructions, refer to the ML-Agents documentation.
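
In practice, training runs are usually launched with the mlagents-learn command-line tool (installed with the mlagents Python package), pointing it at a configuration file like the YAML shown earlier while the scene plays. The low-level Python API can also attach directly to the Editor. A minimal sketch, assuming the mlagents-envs package is installed:

from mlagents_envs.environment import UnityEnvironment

# With file_name=None, Python waits for you to press Play in the Unity Editor
# instead of launching a built executable.
env = UnityEnvironment(file_name=None, seed=1)
env.reset()
print(list(env.behavior_specs))  # behavior names defined in your scene
env.close()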

Competitor Comparisons

OpenAI Gym

A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Broader ecosystem support and wider adoption in the RL community
  • Simpler setup and easier to get started for beginners
  • Supports a variety of environments beyond just game-like simulations

Cons of Gym

  • Limited built-in visualization tools
  • Less integrated with game development workflows
  • Fewer built-in algorithms and training features

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityEnv")
env.reset()  # required before behaviors can be queried
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)

Gym:

import gym
env = gym.make('CartPole-v1')
observation = env.reset()
action = env.action_space.sample()
observation, reward, done, info = env.step(action)

Summary

ml-agents is tailored for Unity-based environments and game development, offering robust integration with the Unity engine and better visualization tools. It provides more built-in algorithms and training features but has a steeper learning curve.

Gym, on the other hand, is more versatile and widely adopted in the RL community. It's easier to set up and use for beginners but lacks some of the advanced features and game development integration that ml-agents offers.

The choice between the two depends on the specific needs of the project, with ml-agents being more suitable for Unity-based games and simulations, while Gym is better for general-purpose RL research and experimentation.

DeepMind OpenSpiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

Pros of open_spiel

  • Broader scope: Focuses on general game theory and reinforcement learning across various game types
  • More extensive algorithm implementations: Includes a wide range of RL and game theory algorithms
  • Language flexibility: Supports multiple programming languages (C++, Python, Julia)

Cons of open_spiel

  • Steeper learning curve: Requires more domain knowledge in game theory and RL
  • Less integration with game engines: Primarily focused on abstract game representations
  • Smaller community: Less widespread adoption compared to ml-agents

Code Comparison

ml-agents (C#):

public override void OnActionReceived(ActionBuffers actionBuffers)
{
    MoveAgent(actionBuffers.DiscreteActions);
}

open_spiel (Python):

def step(self, action):
    self._state.apply_action(action)
    return self._state.observation_tensor(), self._state.rewards(), self._state.is_terminal(), {}

Both examples show how actions are processed in each framework, with ml-agents using a more Unity-specific approach and open_spiel following a more general RL paradigm.
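
For reference, here is a minimal sketch of driving an OpenSpiel game end-to-end through its pyspiel Python API; the game name is just an illustrative built-in:

import random
import pyspiel

# Load a built-in game and play it out with uniformly random legal actions
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)
print(state.returns())  # final return for each player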

Google Research Football

An open-source reinforcement learning environment for football (soccer), from Google Research.

Pros of Football

  • Focused on a specific domain (soccer), allowing for more specialized and realistic simulations
  • Provides a comprehensive soccer environment with advanced physics and game rules
  • Supports multi-agent learning scenarios, ideal for team-based strategies

Cons of Football

  • Limited to soccer-specific applications, less versatile than ml-agents
  • May require more domain knowledge to effectively utilize and customize
  • Potentially steeper learning curve for users unfamiliar with soccer mechanics

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityEnv")
env.reset()
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)

Football:

from gfootball.env import create_environment
env = create_environment(env_name="11_vs_11_stochastic")
observation = env.reset()
action = env.action_space.sample()
observation, reward, done, info = env.step(action)

Both repositories provide Python APIs for interacting with their respective environments. ml-agents offers a more generic approach suitable for various Unity-based simulations, while Football provides a soccer-specific interface with built-in actions and observations tailored to the sport.

Microsoft AirSim

Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research.

Pros of AirSim

  • Provides a more realistic simulation environment, especially for drones and autonomous vehicles
  • Offers physics-based sensor models for cameras, LiDAR, and IMU
  • Supports multiple programming languages (C++, Python, C#) for greater flexibility

Cons of AirSim

  • Steeper learning curve due to its complexity and reliance on Unreal Engine
  • Less focused on general-purpose machine learning tasks compared to ml-agents
  • Requires more computational resources for high-fidelity simulations

Code Comparison

ml-agents (Python):

from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityEnv")
env.reset()
behavior_name = list(env.behavior_specs)[0]
decision_steps, _ = env.get_steps(behavior_name)

AirSim (Python):

import airsim
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

Both repositories provide powerful simulation environments for AI research, but they cater to different use cases. ml-agents is more accessible and versatile for general machine learning tasks, while AirSim excels in realistic simulations for specific domains like robotics and autonomous vehicles.

DeepMind dm_control

Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.

Pros of dm_control

  • More focused on physics-based control tasks and robotics simulations
  • Offers a wider range of pre-built environments and tasks
  • Integrates seamlessly with DeepMind's machine learning libraries

Cons of dm_control

  • Steeper learning curve for beginners in reinforcement learning
  • Less extensive documentation and community support compared to ml-agents
  • Limited to Python, while ml-agents supports multiple programming languages

Code Comparison

dm_control:

from dm_control import suite
env = suite.load(domain_name="cartpole", task_name="swingup")
time_step = env.reset()
action = env.action_spec().generate_value()
next_time_step = env.step(action)

ml-agents:

from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityEnv")
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]
decision_steps, terminal_steps = env.get_steps(behavior_name)
# Sample a valid random action for every agent awaiting a decision
env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))

Both repositories provide powerful tools for reinforcement learning in simulated environments. dm_control excels in physics-based control tasks and robotics simulations, while ml-agents offers a more accessible platform for game developers and a wider range of applications beyond robotics.
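
To make dm_control's TimeStep-based interaction pattern concrete, here is a minimal sketch of a full random-policy rollout, assuming a bounded action spec as in the cartpole task above:

import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
spec = env.action_spec()
time_step = env.reset()
total_reward = 0.0
while not time_step.last():
    # Sample uniformly within the bounded action spec
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)
    total_reward += time_step.reward
print(total_reward)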

OpenAI Universe

Universe: a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

Pros of Universe

  • Supports a wide range of environments, including real-world applications and games
  • Provides a standardized API for interacting with various environments
  • Allows for training agents on complex, high-dimensional tasks

Cons of Universe

  • Less active development and community support compared to ml-agents
  • May require more setup and configuration for specific environments
  • Limited built-in training algorithms and tools

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="MyUnityGame")
env.reset()
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)

Universe:

import gym
import universe
env = gym.make('flashgames.DuskDrive-v0')
observation_n = env.reset()
action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
observation_n, reward_n, done_n, info = env.step(action_n)

Summary

ml-agents focuses on Unity-based environments and provides a more integrated solution for game developers. Universe offers a broader range of environments but may require more setup. ml-agents has more active development and better documentation, while Universe provides flexibility for diverse tasks.


README

Unity ML-Agents Toolkit


The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents. We provide implementations (based on PyTorch) of state-of-the-art algorithms to enable game developers and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games. Researchers can also use the provided simple-to-use Python API to train Agents using reinforcement learning, imitation learning, neuroevolution, or any other methods. These trained agents can be used for multiple purposes, including controlling NPC behavior (in a variety of settings such as multi-agent and adversarial), automated testing of game builds and evaluating different game design decisions pre-release. The ML-Agents Toolkit is mutually beneficial for both game developers and AI researchers as it provides a central platform where advances in AI can be evaluated on Unity’s rich environments and then made accessible to the wider research and game developer communities.

Features

  • 17+ example Unity environments
  • Support for multiple environment configurations and training scenarios
  • Flexible Unity SDK that can be integrated into your game or custom Unity scene
  • Support for training single-agent, multi-agent cooperative, and multi-agent competitive scenarios via several Deep Reinforcement Learning algorithms (PPO, SAC, MA-POCA, self-play).
  • Support for learning from demonstrations through two Imitation Learning algorithms (BC and GAIL).
  • Quickly and easily add your own custom training algorithm and/or components.
  • Easily definable Curriculum Learning scenarios for complex tasks
  • Train robust agents using environment randomization
  • Flexible agent control with On Demand Decision Making
  • Train using multiple concurrent Unity environment instances
  • Utilizes Unity Sentis to provide native cross-platform support
  • Unity environment control from Python
  • Wrap Unity learning environments as a gym environment (a minimal sketch follows this list)
  • Wrap Unity learning environments as a PettingZoo environment
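
As an example of the gym wrapper mentioned above, a Unity build containing a single behavior can be exposed through the standard gym interface via UnityToGymWrapper. A minimal sketch, assuming a build named "MyEnvironment" and a recent mlagents_envs release (the wrapper's module path has moved between releases):

from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper

unity_env = UnityEnvironment(file_name="MyEnvironment")
env = UnityToGymWrapper(unity_env)
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()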

See our ML-Agents Overview page for detailed descriptions of all these features. Or go straight to our web docs.

Releases & Documentation

Our latest, stable release is Release 21. Click here to get started with the latest release of ML-Agents.

You can also check out our new web docs!

The table below lists all our releases, including our main branch which is under active development and may be unstable. A few helpful guidelines:

  • The Versioning page overviews how we manage our GitHub releases and the versioning process for each of the ML-Agents components.
  • The Releases page contains details of the changes between releases.
  • The Migration page contains details on how to upgrade from earlier releases of the ML-Agents Toolkit.
  • The Documentation links in the table below include installation and usage instructions specific to each release. Remember to always use the documentation that corresponds to the release version you're using.
  • The com.unity.ml-agents package is verified for Unity 2020.1 and later. Verified package releases are numbered 1.0.x.
| Version | Release Date | Source | Documentation | Download | Python Package | Unity Package |
| --- | --- | --- | --- | --- | --- | --- |
| develop (unstable) | -- | source | docs | download | -- | -- |
| Release 21 | October 9, 2023 | source | docs | download | 1.0.0 | 3.0.0 |

If you are a researcher interested in a discussion of Unity as an AI platform, see a pre-print of our reference paper on Unity and the ML-Agents Toolkit.

If you use Unity or the ML-Agents Toolkit to conduct research, we ask that you cite the following paper as a reference:

@article{juliani2020,
  title={Unity: A general platform for intelligent agents},
  author={Juliani, Arthur and Berges, Vincent-Pierre and Teng, Ervin and Cohen, Andrew and Harper, Jonathan and Elion, Chris and Goy, Chris and Gao, Yuan and Henry, Hunter and Mattar, Marwan and Lange, Danny},
  journal={arXiv preprint arXiv:1809.02627},
  url={https://arxiv.org/pdf/1809.02627.pdf},
  year={2020}
}

Additionally, if you use the MA-POCA trainer in your research, we ask that you cite the following paper as a reference:

@article{cohen2022,
  title={On the Use and Misuse of Absorbing States in Multi-agent Reinforcement Learning},
  author={Cohen, Andrew and Teng, Ervin and Berges, Vincent-Pierre and Dong, Ruo-Ping and Henry, Hunter and Mattar, Marwan and Zook, Alexander and Ganguly, Sujoy},
  journal={RL in Games Workshop AAAI 2022},
  url={http://aaai-rlg.mlanctot.info/papers/AAAI22-RLG_paper_32.pdf},
  year={2022}
}

Additional Resources

We have a Unity Learn course, ML-Agents: Hummingbirds, that provides a gentle introduction to Unity and the ML-Agents Toolkit.

We've also partnered with CodeMonkeyUnity to create a series of tutorial videos on how to implement and use the ML-Agents Toolkit.

We have also published a series of blog posts that are relevant for ML-Agents.


Community and Feedback

The ML-Agents Toolkit is an open-source project and we encourage and welcome contributions. If you wish to contribute, be sure to review our contribution guidelines and code of conduct.

For problems with the installation and setup of the ML-Agents Toolkit, or discussions about how to best set up or train your agents, please create a new thread on the Unity ML-Agents forum and make sure to include as much detail as possible. If you run into any other problems using the ML-Agents Toolkit or have a specific feature request, please submit a GitHub issue.

Please tell us which samples you would like to see shipped with the ML-Agents Unity package by replying to this forum thread.

Your opinion matters a great deal to us. Only by hearing your thoughts on the Unity ML-Agents Toolkit can we continue to improve and grow. Please take a few minutes to let us know about it.

For any other questions or feedback, connect directly with the ML-Agents team at ml-agents@unity3d.com.

Privacy

In order to improve the developer experience for Unity ML-Agents Toolkit, we have added in-editor analytics. Please refer to "Information that is passively collected by Unity" in the Unity Privacy Policy.