google-deepmind/deepmind-research

This repository contains implementations and illustrative code to accompany DeepMind publications


Top Related Projects

  • tensorflow/models (77,006 stars): Models and examples built with TensorFlow
  • facebookresearch/fairseq (30,331 stars): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
  • pytorch/examples (22,218 stars): A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
  • openai/baselines (15,725 stars): OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
  • microsoft/CNTK (17,500 stars): Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

Quick Overview

The google-deepmind/deepmind-research repository is a collection of DeepMind's research projects and implementations. It showcases various cutting-edge machine learning and artificial intelligence techniques, algorithms, and models developed by DeepMind researchers. The repository serves as a resource for the AI research community to explore and build upon DeepMind's work.

Pros

  • Provides access to state-of-the-art AI research implementations
  • Covers a wide range of AI topics, including reinforcement learning, computer vision, and natural language processing
  • Offers opportunities for researchers and developers to learn from and extend DeepMind's work
  • Regularly updated with new projects and improvements

Cons

  • Some projects may have limited documentation or support
  • Not all projects are maintained or updated regularly
  • The complexity of some implementations may be challenging for beginners
  • Some projects may require significant computational resources to run

Code Examples

Because the repository hosts many independent projects, each with its own codebase, no single code example applies to all of it. The sketches below show how you might interact with a few of the projects; import paths and constructors are schematic rather than exact:

  1. Using the BigBiGAN model:
import tensorflow as tf
from bigbigan import BigBiGAN  # schematic import; actual usage follows the project's own README

model = BigBiGAN()
x = tf.random.normal([1, 128, 128, 3])  # one 128x128 RGB image
z = model.encode(x)           # image -> latent code
x_recon = model.generate(z)   # latent code -> reconstructed image
  2. Running a Hanabi agent:
from hanabi_learning_environment import rl_env
from hanabi_agents import rainbow_agent  # schematic import; agent module names vary

env = rl_env.make('Hanabi-Full', num_players=2)
agent = rainbow_agent.RainbowAgent(env.observation_space, env.action_space)

obs = env.reset()
done = False  # episode-termination flag
while not done:
    action = agent.act(obs)
    obs, reward, done, _ = env.step(action)
  3. Using the Perceiver IO model:
import jax
from perceiver_io import perceiver_io  # schematic import; the release builds models with Haiku

model = perceiver_io.PerceiverIO()  # illustrative; real constructors take encoder/decoder configs
x = jax.random.normal(jax.random.PRNGKey(0), (1, 224, 224, 3))  # one 224x224 RGB image
output = model(x)

Getting Started

To get started with a specific project in the deepmind-research repository:

  1. Clone the repository:

    git clone https://github.com/google-deepmind/deepmind-research.git
    cd deepmind-research
    
  2. Navigate to the project directory of interest:

    cd project_name
    
  3. Install the required dependencies (usually listed in a requirements.txt file):

    pip install -r requirements.txt
    
  4. Follow the project-specific README or documentation for further instructions on running the code or experiments.

Note that each project may have different setup requirements and dependencies, so be sure to read the project-specific documentation carefully.

Competitor Comparisons

tensorflow/models (77,006 stars)

Models and examples built with TensorFlow

Pros of models

  • Broader scope, covering a wide range of machine learning applications
  • More extensive documentation and tutorials for beginners
  • Larger community and more frequent updates

Cons of models

  • Less focus on cutting-edge research compared to deepmind-research
  • May include more deprecated or outdated models

Code Comparison

models:

import tensorflow as tf
from official.nlp import bert

# bert_config, input_ids and input_mask are assumed to be defined elsewhere.
model = bert.BertModel(config=bert_config)
outputs = model(input_ids, attention_mask=input_mask)

deepmind-research:

import jax
import jax.numpy as jnp
import haiku as hk

# Haiku modules must be built inside a function passed to hk.transform.
def policy(observation):
    return hk.nets.MLP([64, 64, 5])(observation)

net = hk.transform(policy)
params = net.init(jax.random.PRNGKey(0), jnp.zeros([1, 8]))
action = net.apply(params, None, jnp.zeros([1, 8]))

The code snippets show different approaches:

  • models uses TensorFlow and focuses on pre-built models like BERT
  • deepmind-research uses JAX and Haiku, emphasizing flexibility for custom research implementations

Both repositories offer valuable resources for machine learning practitioners, with models providing a more accessible entry point for beginners and a wider range of applications, while deepmind-research focuses on cutting-edge research implementations.

facebookresearch/fairseq (30,331 stars)

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More focused on sequence-to-sequence learning and natural language processing tasks
  • Provides a comprehensive toolkit for training custom models and running inference
  • Actively maintained with frequent updates and contributions from the community

Cons of fairseq

  • Narrower scope compared to deepmind-research's diverse range of AI topics
  • May require more domain-specific knowledge to utilize effectively
  • Less emphasis on cutting-edge research papers and novel algorithms

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel
en2de = TransformerModel.from_pretrained(
    '/path/to/checkpoints',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data-bin/wmt16_en_de_bpe32k'
)
en2de.translate('Hello world!')

deepmind-research:

import sonnet as snt
import tensorflow as tf

class MLP(snt.Module):
  def __init__(self, output_sizes):
    super().__init__()
    self.layers = [snt.Linear(size) for size in output_sizes]

  def __call__(self, x):
    for layer in self.layers[:-1]:
      x = tf.nn.relu(layer(x))
    return self.layers[-1](x)

The code snippets highlight the different focus areas of the repositories. fairseq provides high-level APIs for NLP tasks, while deepmind-research offers more general-purpose machine learning components.
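As a side note, the hand-rolled module above mirrors a pattern Sonnet also ships as a built-in, snt.nets.MLP; a minimal usage sketch with arbitrary shapes:

import sonnet as snt
import tensorflow as tf

# Sonnet provides the hand-rolled pattern above as a built-in module.
mlp = snt.nets.MLP([64, 64, 10])
logits = mlp(tf.random.normal([8, 32]))  # batch of 8, 32 features -> 10 outputs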

pytorch/examples (22,218 stars)

A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.

Pros of examples

  • More beginner-friendly with straightforward implementations of common models and tasks
  • Wider range of examples covering various domains in machine learning
  • Better documentation and explanations for each example

Cons of examples

  • Fewer cutting-edge research implementations compared to deepmind-research
  • Fewer complex, state-of-the-art models and algorithms
  • Limited focus on advanced AI research areas

Code Comparison

examples:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)

deepmind-research:

import sonnet as snt
import tensorflow as tf

model = snt.Linear(output_size=1)
optimizer = snt.optimizers.SGD(learning_rate=0.01)

The examples repository uses PyTorch, while deepmind-research primarily uses TensorFlow and Sonnet (DeepMind's TensorFlow-based library). The examples code is more straightforward and accessible, while deepmind-research tends to use more advanced and custom implementations.
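To make the comparison concrete, here is a minimal sketch of one full training step built from the PyTorch pieces above (the data is random and purely illustrative):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(32, 10), torch.randn(32, 1)  # a random illustrative batch
loss = nn.functional.mse_loss(model(x), y)      # mean-squared-error loss
optimizer.zero_grad()
loss.backward()
optimizer.step()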

huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library of pre-trained models for various NLP tasks
  • Well-documented and user-friendly API for easy implementation
  • Active community support and frequent updates

Cons of transformers

  • Focused primarily on NLP tasks, limiting its scope compared to deepmind-research
  • May have higher computational requirements for some models

Code Comparison

transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

deepmind-research:

import sonnet as snt
import tensorflow as tf

model = snt.Linear(output_size=10)
x = tf.random.normal([8, 5])
y = model(x)

The transformers example demonstrates its simplicity in using pre-trained models, while the deepmind-research example showcases its flexibility in building custom neural network architectures.
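For readers who want more control than pipeline offers, the same classification can be done one level down; the checkpoint name below is the pipeline's usual sentiment-analysis default and is used here only as an illustration:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I love this product!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])  # e.g. POSITIVE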

openai/baselines (15,725 stars)

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

Pros of baselines

  • More focused on reinforcement learning algorithms and implementations
  • Better documentation and examples for getting started quickly
  • More active community contributions and updates

Cons of baselines

  • Narrower scope, primarily centered on RL algorithms
  • Less diverse range of research topics and applications
  • Smaller codebase with fewer cutting-edge research implementations

Code Comparison

baselines (DQN implementation):

def learn(env,
          network,
          seed=None,
          lr=5e-4,
          total_timesteps=100000,
          buffer_size=50000,
          exploration_fraction=0.1,
          exploration_final_eps=0.02,
          train_freq=1,
          batch_size=32,
          print_freq=100,
          checkpoint_freq=10000,
          checkpoint_path=None,
          learning_starts=1000,
          gamma=1.0,
          target_network_update_freq=500,
          prioritized_replay=False,
          prioritized_replay_alpha=0.6,
          prioritized_replay_beta0=0.4,
          prioritized_replay_beta_iters=None,
          prioritized_replay_eps=1e-6,
          param_noise=False,
          callback=None,
          load_path=None,
          **network_kwargs
            ):

deepmind-research (AlphaFold protein structure prediction):

def predict_structure(
    fasta_path: str,
    output_dir_base: str,
    data_pipeline: pipeline.DataPipeline,
    model_runners: Sequence[model.RunModel],
    amber_relaxer: relax.AmberRelaxation,
    random_seed: int,
    benchmark: bool = False,
    use_precomputed_msas: bool = False,
) -> None:
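In practice the long learn signature above is driven mostly with defaults; a minimal sketch in the style of the baselines examples (gym is assumed to be installed and API-compatible with baselines):

import gym
from baselines import deepq

# Train a small MLP policy with DQN, keeping most arguments at their defaults.
env = gym.make("CartPole-v0")
act = deepq.learn(env, network='mlp', total_timesteps=100000)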
microsoft/CNTK (17,500 stars)

Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

Pros of CNTK

  • More mature and production-ready framework
  • Better performance and scalability for large-scale deep learning tasks
  • Extensive documentation and community support

Cons of CNTK

  • Less active development and updates compared to deepmind-research
  • Narrower focus on neural networks, while deepmind-research covers a broader range of AI research topics

Code Comparison

CNTK example:

import cntk as C

x = C.input_variable(2)
t = C.input_variable(1)  # target labels
y = C.layers.Dense(1)(x)
z = C.sigmoid(y)

# Train against a proper loss rather than the raw layer outputs.
loss = C.binary_cross_entropy(z, t)
trainer = C.train.Trainer(z, (loss, loss), C.sgd(z.parameters, 0.1))

deepmind-research example (using TensorFlow):

import tensorflow as tf

# TF1-style graph code; z stands in for a real loss here.
x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.layers.dense(x, 1)
z = tf.sigmoid(y)

train_op = tf.train.GradientDescentOptimizer(0.1).minimize(z)

Note that deepmind-research is not a single framework but a collection of research projects, so the code example is just a representation using TensorFlow, which is commonly used in their projects.
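Since the snippet above uses long-deprecated TF1 APIs, a roughly equivalent TF2/Keras sketch (our own, not taken from the repository) looks like this:

import tensorflow as tf

# A TF2/Keras equivalent of the TF1 snippet above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss="binary_crossentropy")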


README

DeepMind Research

This repository contains implementations and illustrative code to accompany DeepMind publications. Along with publishing papers to accompany research conducted at DeepMind, we release open-source environments, data sets, and code to enable the broader research community to engage with our work and build upon it, with the ultimate goal of accelerating scientific progress to benefit society. For example, you can build on our implementations of the Deep Q-Network or Differential Neural Computer, or experiment in the same environments we use for our research, such as DeepMind Lab or StarCraft II.

If you enjoy building tools, environments, software libraries, and other infrastructure of the kind listed below, you can view open positions to work in related areas on our careers page.

For a full list of our publications, please see https://deepmind.com/research/publications/

Projects

Disclaimer

This is not an official Google product.