google-deepmind/lab

A customisable 3D platform for agent-based AI research

Top Related Projects

  • Open Spiel: OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

  • Gym: A toolkit for developing and comparing reinforcement learning algorithms.

  • ml-agents: The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

  • Malmo: Project Malmo is a platform for Artificial Intelligence experimentation and research built on top of Minecraft, aiming to inspire a new generation of research into the challenging new problems presented by this unique environment.

  • Universe: A software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

  • PettingZoo: An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities.

Quick Overview

DeepMind Lab is a 3D learning environment based on id Software's Quake III Arena via ioquake3 and other open source software. It is designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents.

Pros

  • Highly customizable and extensible environment for AI research
  • Provides a variety of pre-built tasks and the ability to create custom ones
  • Supports both discrete and continuous control
  • Integrates well with popular machine learning frameworks (see the wrapper sketch after this list)
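
A minimal sketch of such an integration, assuming only the deepmind_lab Python module; the Gym-style wrapper class itself is hypothetical, not part of the library:

import numpy as np
import deepmind_lab

class LabGymStyleEnv:
    """Hypothetical adapter exposing a Gym-like reset/step interface."""

    def __init__(self, level='seekavoid_arena_01'):
        self._env = deepmind_lab.Lab(level, ['RGB_INTERLEAVED'])

    def reset(self):
        self._env.reset()
        return self._env.observations()['RGB_INTERLEAVED']

    def step(self, action):
        # DeepMind Lab's step() returns only the reward; observations and
        # episode status are queried separately.
        reward = self._env.step(np.asarray(action, dtype=np.intc), num_steps=1)
        done = not self._env.is_running()
        obs = self._env.observations()['RGB_INTERLEAVED'] if not done else None
        return obs, reward, done, {}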

Cons

  • Steep learning curve for newcomers to 3D game environments
  • Resource-intensive, requiring significant computational power
  • Limited documentation for advanced features and customizations
  • Dependency on older game engine technology (Quake III Arena)

Code Examples

  1. Creating a basic DeepMind Lab environment:

import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'])
env.reset()

  2. Taking a step in the environment (note that step() returns only the reward; observations and episode status are queried separately):

import numpy as np

action = np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc)  # Move forward
reward = env.step(action, num_steps=1)
done = not env.is_running()

  3. Rendering the current observation:

import matplotlib.pyplot as plt

obs = env.observations()['RGB_INTERLEAVED']
plt.imshow(obs)
plt.show()

Getting Started

To get started with DeepMind Lab, follow these steps:

  1. Install dependencies:

sudo apt-get install build-essential python3-dev python3-numpy python3-pil python3-pip python3-setuptools

  2. Clone the repository:

git clone https://github.com/deepmind/lab.git

  3. Build DeepMind Lab and run its Python module test:

cd lab
bazel test -c opt //python/tests:python_module_test

  4. Build and install the PIP package (output paths may differ on your system):

bazel build -c opt //python/pip_package:build_pip_package
./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
pip3 install /tmp/dmlab_pkg/deepmind_lab-*.whl

  5. Run a simple example:

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'])
env.reset()
reward = env.step(np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc), num_steps=1)
obs = env.observations()['RGB_INTERLEAVED']
print(f"Observation shape: {obs.shape}")
print(f"Reward: {reward}")

This will create a basic environment, take a step, and print the observation shape and reward.
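
To see what each of the seven action components controls, you can inspect the action spec on the same environment; each entry is a dict giving the component's name and value range:

for spec in env.action_spec():
    print(spec['name'], spec['min'], spec['max'])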

Competitor Comparisons

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

Pros of Open Spiel

  • Focuses on game theory and multi-agent reinforcement learning
  • Supports a wider variety of games and environments
  • More accessible for researchers in game theory and AI

Cons of Open Spiel

  • Less emphasis on 3D environments and first-person perspectives
  • May require more setup for complex game scenarios
  • Limited support for custom visual environments

Code Comparison

Open Spiel example:

import pyspiel

game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
    legal_actions = state.legal_actions()
    action = legal_actions[0]
    state.apply_action(action)

Lab example:

local game = require 'dmlab.system.game'
local tensor = require 'dmlab.system.tensor'

local api = {}

function api:init()
  self.pos = tensor.DoubleTensor(3):fill(0)
end

function api:step(action)
  self.pos:add(action)
end

return api

Open Spiel is more focused on abstract game representations and algorithms, while Lab provides a framework for 3D environments and first-person gameplay. Open Spiel offers a broader range of game types, making it more versatile for game theory research. Lab, on the other hand, excels in creating immersive 3D environments for reinforcement learning tasks.


A toolkit for developing and comparing reinforcement learning algorithms.

Pros of Gym

  • Wider variety of environments, including classic control, robotics, and Atari games
  • More active community and frequent updates
  • Easier to install and use, with fewer dependencies

Cons of Gym

  • Less realistic 3D environments compared to Lab
  • Limited customization options for existing environments
  • Fewer built-in tools for analyzing agent performance

Code Comparison

Lab environment setup:

import deepmind_lab
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'])
env.reset()

Gym environment setup:

import gym
env = gym.make('CartPole-v1')
env.reset()

Both Lab and Gym are popular reinforcement learning environments, but they cater to different needs. Lab focuses on complex 3D environments and is particularly suited for research in navigation and visual learning. Gym offers a broader range of simpler environments and is more beginner-friendly.

Lab provides more realistic scenarios and better tools for in-depth analysis, while Gym excels in ease of use and community support. The choice between them depends on the specific requirements of your reinforcement learning project.


The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Pros of ml-agents

  • Built on Unity, offering a more user-friendly and visually rich environment for creating simulations
  • Provides a wider range of pre-built environments and examples, making it easier for beginners to get started
  • Supports multi-agent scenarios out of the box, allowing for more complex simulations

Cons of ml-agents

  • May have a steeper learning curve for those unfamiliar with Unity
  • Less focused on pure research, potentially offering fewer advanced features for cutting-edge AI experiments
  • Performance might be slower compared to lab due to the overhead of the Unity engine

Code Comparison

ml-agents:

from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")
env.reset()  # behavior specs are populated after the first reset
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)

lab:

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab("seekavoid_arena_01", ["RGB_INTERLEAVED"])
env.reset()
reward = env.step(np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc), num_steps=1)
obs = env.observations()["RGB_INTERLEAVED"]

Both repositories provide powerful tools for AI research and development, but they cater to different needs. ml-agents is more accessible and versatile, while lab is more focused on advanced AI research in 3D environments.


Project Malmo is a platform for Artificial Intelligence experimentation and research built on top of Minecraft, aiming to inspire a new generation of research into the challenging new problems presented by this unique environment.

Pros of Malmo

  • Built on Minecraft, providing a rich, familiar environment for AI research
  • Supports multiple programming languages (Python, C++, C#, Java)
  • Offers a more user-friendly setup process for beginners

Cons of Malmo

  • Less flexibility in customizing the environment compared to Lab
  • Slower performance due to running on top of Minecraft
  • Limited to Minecraft-style graphics and physics

Code Comparison

Lab (C++):

// Sketch based on the C API in public/dmlab.h; setup steps abbreviated.
#include "public/dmlab.h"

DeepMindLabLaunchParams params = {};
params.renderer = DeepMindLabRenderer_Software;
EnvCApi env_c_api;
void* context;
dmlab_connect(&params, &env_c_api, &context);
env_c_api.setting(context, "levelName", "seekavoid_arena_01");

Malmo (Python):

import MalmoPython

agent_host = MalmoPython.AgentHost()
# mission_xml: an XML mission definition string, defined elsewhere
my_mission = MalmoPython.MissionSpec(mission_xml, True)
my_mission_record = MalmoPython.MissionRecordSpec()
agent_host.startMission(my_mission, my_mission_record)

Both repositories provide platforms for AI research in 3D environments. Lab offers more customization and performance but requires more setup, while Malmo provides an accessible Minecraft-based environment with multi-language support. The code examples show the initialization process for each platform, highlighting their different approaches to environment creation and agent interaction.

Universe: a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

Pros of Universe

  • Broader range of environments, including real-world tasks and games
  • Easier integration with existing software and APIs
  • More flexible and extensible architecture

Cons of Universe

  • Less optimized for performance compared to Lab
  • Steeper learning curve due to its complexity
  • Less focused on specific research areas

Code Comparison

Lab example:

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'])
env.reset()
reward = env.step(np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc), num_steps=1)

Universe example:

import gym
import universe
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)
observation_n = env.reset()
action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
observation_n, reward_n, done_n, info = env.step(action_n)

Both repositories provide environments for reinforcement learning research, but they differ in scope and focus. Lab is more specialized for 3D navigation and puzzle-solving tasks, while Universe offers a wider variety of environments, including web browsers and desktop applications. Lab tends to be more performant and tailored for specific research questions, whereas Universe prioritizes flexibility and real-world applicability.

An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities

Pros of PettingZoo

  • Broader scope: Supports multi-agent reinforcement learning environments
  • More accessible: Easier to install and use, with fewer dependencies
  • Active development: Regularly updated with new features and environments

Cons of PettingZoo

  • Less complex environments: Generally simpler than Lab's 3D environments
  • Limited to 2D: Lacks 3D rendering capabilities offered by Lab
  • Smaller community: Less extensive documentation and fewer third-party resources

Code Comparison

PettingZoo example:

from pettingzoo.butterfly import knights_archers_zombies_v10

env = knights_archers_zombies_v10.env()
env.reset()
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # Agents that are done must be stepped with a None action
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()

Lab example:

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'])
env.reset()
obs = env.observations()['RGB_INTERLEAVED']
action = np.array([0, 0, 0, 1, 0, 0, 0], dtype=np.intc)  # Move forward
reward = env.step(action, num_steps=4)

Both libraries provide environments for reinforcement learning, but PettingZoo focuses on multi-agent scenarios and offers a more diverse range of 2D environments. Lab, on the other hand, specializes in complex 3D environments with a focus on first-person navigation and puzzle-solving tasks.

README

DeepMind Lab

DeepMind Lab is a 3D learning environment based on id Software's Quake III Arena via ioquake3 and other open source software.

DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. Its primary purpose is to act as a testbed for research in artificial intelligence, especially deep reinforcement learning.

About

Disclaimer: This is not an official Google product.

If you use DeepMind Lab in your research and would like to cite the DeepMind Lab environment, we suggest you cite the DeepMind Lab paper.

You can reach us at lab@deepmind.com.

Getting started on Linux

$ git clone https://github.com/deepmind/lab
$ cd lab

For a live example of a random agent, run

lab$ bazel run :python_random_agent --define graphics=sdl -- \
               --length=10000 --width=640 --height=480

Here is some more detailed build documentation, including how to install dependencies if you don't have them.

To enable compiler optimizations, pass the flag --compilation_mode=opt, or -c opt for short, to each bazel build, bazel test and bazel run command. The flag is omitted from the examples here for brevity, but it should be used for real training and evaluation where performance matters.
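
For example, the random-agent command above with optimizations enabled:

lab$ bazel run -c opt :python_random_agent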

Play as a human

To test the game using human input controls, run

lab$ bazel run :game -- --level_script=tests/empty_room_test --level_setting=logToStdErr=true
# or:
lab$ bazel run :game -- -l tests/empty_room_test -s logToStdErr=true

Leave the logToStdErr setting off to disable most log output.

The values of observations that the environment exposes can be printed at every step by adding a flag --observation OBSERVATION_NAME for each observation of interest.

lab$ bazel run :game -- --level_script=lt_chasm --observation VEL.TRANS --observation VEL.ROT

Train an agent

DeepMind Lab ships with an example random agent in python/random_agent.py which can be used as a starting point for implementing a learning agent. To let this agent interact with DeepMind Lab for training, run

lab$ bazel run :python_random_agent

The Python API is used for agent-environment interactions. We also provide bindings to DeepMind's "dm_env" general API for reinforcement learning, as well as a way to build a self-contained PIP package; see the separate documentation for details.
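
As a minimal sketch of that interaction loop (the level name, observation, and config values here are just example settings):

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'],
                       config={'width': '96', 'height': '72'})
env.reset()
total_reward = 0.0
while env.is_running():
    frame = env.observations()['RGB_INTERLEAVED']
    action = np.zeros(7, dtype=np.intc)  # a learning agent would choose an action here
    total_reward += env.step(action, num_steps=4)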

DeepMind Lab ships with different levels implementing different tasks. These tasks can be configured using Lua scripts, as described in the Lua API.
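
A level script is a Lua module that returns a table of API callbacks. A bare-bones skeleton follows; the callback names are taken from the Lua API documentation, but treat the details as illustrative rather than a complete level:

local api = {}

-- Called at the start of each episode.
function api:start(episode, seed)
end

-- May adjust the engine's command line; returning it unchanged is a no-op.
function api:commandLine(oldCommandLine)
  return oldCommandLine
end

return api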


Upstream sources

DeepMind Lab is built from the ioquake3 game engine, and it uses the tools q3map2 and bspc for map creation. Bug fixes and cleanups that originate with those projects are best fixed upstream and then merged into DeepMind Lab.

  • bspc is taken from github.com/TTimo/bspc, revision d9a372db3fb6163bc49ead41c76c801a3d14cf80. There are virtually no local modifications, although we integrate this code with the main ioq3 code and do not use their copy in the deps directory. We expect this code to be stable.

  • q3map2 is taken from github.com/TTimo/GtkRadiant, revision d3d00345c542c8d7cc74e2e8a577bdf76f79c701. A few minor local modifications add synchronization. We also expect this code to be stable.

  • ioquake3 is taken from github.com/ioquake/ioq3, revision 29db64070aa0bae49953bddbedbed5e317af48ba. The code contains extensive modifications and additions. We aim to merge upstream changes occasionally.

We are very grateful to the maintainers of these repositories for all their hard work on maintaining high-quality code bases.

External dependencies, prerequisites and porting notes

DeepMind Lab currently ships as source code only. It depends on a few external software libraries, which we ship in several different ways:

  • The zlib, glib, libxml2, jpeg and png libraries are referenced as external Bazel sources, and Bazel BUILD files are provided. The dependent code itself should be fairly portable, but the BUILD rules we ship are specific to Linux on x86. To build on a different platform you will most likely have to edit those BUILD files.

  • Message digest algorithms are included in this package (in //third_party/md), taken from the reference implementations of their respective RFCs. A "generic reinforcement learning API" is included in //third_party/rl_api, which has also been created by the DeepMind Lab authors. This code is portable.

  • EGL headers are included in this package (in //third_party/GL/{EGL,KHR}), taken from the Khronos OpenGL/OpenGL ES XML API Registry at www.khronos.org/registry/EGL. The headers have been modified slightly to remove the dependency of EGL on X.

  • Several additional libraries are required but are not shipped in any form; they must be present on your system:

    • SDL 2
    • gettext (required by glib)
    • OpenGL: A hardware driver and library are needed for hardware-accelerated human play. The headless library that machine learning agents will want to use can use either hardware-accelerated rendering via EGL or GLX or software rendering via OSMesa, depending on the --define headless=... build setting (see the example invocation after this list).
    • Python 2.7 (other versions might work, too) with NumPy and PIL, or Python 3 (at least 3.5) with NumPy and Pillow. A few tests require a NumPy version of at least 1.8.
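
For example, to run the random agent with software rendering via OSMesa (one of the supported values for the build setting mentioned above):

lab$ bazel run :python_random_agent --define headless=osmesa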

The build rules use a few compiler settings that are specific to GCC. If some flags are not recognized by your compiler (typically these are specific warning suppressions), you may have to edit those flags. The warnings they suppress are noisy but harmless.