google-research/google-research

Google Research

34,067 stars · 7,859 forks

Top Related Projects

  • tensorflow/models (77,006 stars): Models and examples built with TensorFlow
  • facebookresearch/fairseq (30,331 stars): Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX
  • microsoft/DeepSpeed (34,658 stars): DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective
  • pytorch/pytorch (82,049 stars): Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • openai/gym (34,643 stars): A toolkit for developing and comparing reinforcement learning algorithms

Quick Overview

The Google Research repository is a collection of research projects and experiments conducted by the Google AI research team. It covers a wide range of topics, including machine learning, natural language processing, computer vision, and more. The repository serves as a platform for sharing research code, models, and datasets with the broader research community.

Pros

  • Diverse Research Topics: The repository covers a wide range of research areas, providing a comprehensive overview of Google's AI research efforts.
  • Open-Source Contributions: The repository encourages open-source contributions, allowing the community to collaborate and build upon the research.
  • High-Quality Code and Documentation: The code and documentation in the repository are generally well-maintained and of high quality, making it easier for researchers and developers to understand and use the projects.
  • Cutting-Edge Techniques: The projects in the repository often showcase the latest advancements in AI and machine learning, allowing researchers to stay up-to-date with the latest developments.

Cons

  • Lack of Unified Structure: The repository contains a large number of individual projects, which can make it challenging to navigate and find specific resources.
  • Varying Levels of Maintenance: While many projects are actively maintained, some may have less frequent updates or may be abandoned, which can impact their long-term usability.
  • Limited Guidance for Beginners: Many projects lack comprehensive getting-started guides or tutorials, which can make them difficult for newcomers to pick up.
  • Potential Bias Towards Google's Interests: As the repository is maintained by Google, it may reflect the company's research priorities and interests, which may not always align with the broader research community's needs.

Code Examples

Below are a few illustrative examples in the spirit of code released by Google Research. Note that the repository is not an installable package, and several projects (such as BERT and Dopamine) live in sibling repositories under the google-research and google organizations, so the import paths assume the relevant project code is on your PYTHONPATH:

  1. Transformer-based Language Model (BERT):
import tensorflow as tf
from bert import modeling  # modeling.py from the google-research/bert project

# Load the pre-trained BERT configuration
bert_config = modeling.BertConfig.from_json_file('path/to/bert_config.json')

# Build the model for a batch of token ids (TF1-style graph construction)
input_ids = tf.constant([[101, 7592, 2110, 2001, 102]])
model = modeling.BertModel(
    config=bert_config, is_training=False, input_ids=input_ids)
sequence_output = model.get_sequence_output()

This code demonstrates how the original BERT research code builds a model for language understanding tasks; the config path is a placeholder for a downloaded checkpoint.

  2. Reinforcement Learning Agent:
import tensorflow as tf
from dopamine.agents.dqn import dqn_agent  # from the google/dopamine project

# Create a DQN agent for an Atari-style environment (84x84 frames, stack of 4)
sess = tf.compat.v1.Session()
agent = dqn_agent.DQNAgent(
    sess,
    num_actions=4,
    observation_shape=(84, 84),
    observation_dtype=tf.uint8,
    stack_size=4)

This code demonstrates how to create a Deep Q-Network (DQN) agent with the Dopamine reinforcement learning framework, which Google releases in its own repository.
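
In practice, training in Dopamine is driven by its Runner and gin configuration files rather than by a method on the agent. A minimal sketch, assuming Dopamine is installed and using its stock DQN gin config (paths are illustrative):

from dopamine.discrete_domains import run_experiment

# Load the standard DQN gin config, then run Dopamine's train/eval loop
run_experiment.load_gin_configs(['dopamine/agents/dqn/configs/dqn.gin'], [])
runner = run_experiment.create_runner('/tmp/dopamine_dqn')
runner.run_experiment()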

  3. Image Classification with EfficientNet:
import tensorflow as tf

# Load a pre-trained EfficientNet-B0 (the Keras port of the architecture)
model = tf.keras.applications.EfficientNetB0(include_top=True, weights='imagenet')

# Classify a single image
image = tf.keras.preprocessing.image.load_img('path/to/image.jpg', target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(image)
x = tf.expand_dims(x, axis=0)
predictions = model.predict(x)
print(tf.keras.applications.efficientnet.decode_predictions(predictions, top=3))

This code demonstrates image classification with a pre-trained EfficientNet; it uses the Keras implementation of the architecture, since the original research release exposes a lower-level, TPU-oriented interface.

Getting Started

To get started with the Google Research repository, you can follow these steps:

  1. Clone the repository:
git clone https://github.com/google-research/google-research.git
  2. Navigate to the repository directory:
cd google-research
  3. Explore the available projects and their respective README files to understand the purpose, usage, and setup instructions for each project.
  4. Install any required dependencies for the project you want to use, as described in that project's README. Given the repository's size, the sparse checkout sketched below can also save time.
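
A minimal sketch of such a sparse, shallow checkout (the scann directory is just an example; substitute whichever project you need, and note that not every project ships a requirements file):

git clone --depth=1 --filter=blob:none --sparse https://github.com/google-research/google-research.git
cd google-research
git sparse-checkout set scann
pip install -r scann/requirements.txt  # if the project provides one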

Competitor Comparisons

tensorflow/models (77,006 stars)

Models and examples built with TensorFlow

Pros of models

  • More focused on TensorFlow-specific implementations and models
  • Better organized structure with clear categorization of models
  • More extensive documentation and tutorials for each model

Cons of models

  • Limited to TensorFlow framework, less diverse in terms of research areas
  • May not include cutting-edge research as quickly as google-research

Code Comparison

models:

import tensorflow as tf

model = tf.keras.Sequential([
  tf.keras.layers.Dense(64, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

google-research:

from flax import linen as nn

class MLP(nn.Module):
  @nn.compact
  def __call__(self, x):
    x = nn.Dense(64)(x)
    x = nn.relu(x)
    return nn.Dense(10)(x)
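
For completeness, a Flax module like this one is constructed and used functionally; a brief sketch with illustrative input shapes:

import jax
import jax.numpy as jnp

# Parameters are created explicitly from a PRNG key, then passed to apply
model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 16)))
logits = model.apply(params, jnp.ones((1, 16)))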

The code snippets show that models focuses on TensorFlow-specific implementations, while google-research explores various frameworks and approaches, including JAX and Flax in this example.

google-research offers a broader range of research topics and experimental implementations across multiple domains, while models provides a more curated collection of TensorFlow models with better documentation and organization. The choice between the two depends on whether you need established TensorFlow models or want to explore diverse cutting-edge research implementations.

facebookresearch/fairseq (30,331 stars)

Facebook AI Research Sequence-to-Sequence Toolkit written in Python

Pros of fairseq

  • Focused on sequence modeling and neural machine translation
  • More comprehensive documentation and examples
  • Active community with frequent updates and contributions

Cons of fairseq

  • Narrower scope compared to google-research's diverse range of topics
  • Steeper learning curve for beginners due to specialized nature

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel
en2de = TransformerModel.from_pretrained(
    '/path/to/checkpoints',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data-bin/wmt16_en_de_bpe32k'
)
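
Once loaded, the returned hub interface exposes convenience methods such as translate (assuming the checkpoint directory above exists; the beam width is illustrative):

# Translate a sentence with the loaded model
print(en2de.translate('Hello world!', beam=5))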

google-research:

import tensorflow as tf
from bert import modeling  # from google-research/bert

bert_config = modeling.BertConfig.from_json_file("bert_config.json")
input_ids = tf.constant([[101, 7592, 102]])
model = modeling.BertModel(
    config=bert_config, is_training=False, input_ids=input_ids)
pooled_output = model.get_pooled_output()

fairseq focuses on providing pre-built models and utilities for sequence modeling tasks, while google-research offers a broader range of research implementations across various domains. fairseq's code is more specialized for NLP tasks, whereas google-research's code covers a wider array of machine learning applications.

huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Focused specifically on transformer models, providing a comprehensive library for NLP tasks
  • Extensive documentation and community support, making it more accessible for beginners
  • Regular updates and integration with popular deep learning frameworks

Cons of transformers

  • Limited scope compared to google-research, which covers a broader range of research topics
  • May not include cutting-edge research as quickly as google-research

Code comparison

transformers:

from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

google-research:

import tensorflow as tf
from bert import modeling  # from google-research/bert

bert_config = modeling.BertConfig.from_json_file("bert_config.json")
input_ids = tf.constant([[101, 7592, 102]])
model = modeling.BertModel(
    config=bert_config, is_training=False, input_ids=input_ids)

The transformers library offers a more streamlined API for working with pre-trained models, while google-research provides lower-level implementations that may require more setup but offer greater flexibility for research purposes.
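
As a concrete illustration of that streamlined API, the transformers pipeline interface reduces masked-word prediction to a couple of lines (using the same pre-trained model name as above):

from transformers import pipeline

# One-line masked-language-model inference with pre-trained BERT weights
fill_mask = pipeline('fill-mask', model='bert-base-uncased')
print(fill_mask('Paris is the [MASK] of France.')[0]['token_str'])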

microsoft/DeepSpeed (34,658 stars)

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Focused on optimizing and scaling deep learning training
  • Provides the ZeRO optimizer for efficient large-model training (see the config sketch at the end of this section)
  • Offers comprehensive documentation and tutorials

Cons of DeepSpeed

  • Narrower scope compared to Google Research's diverse projects
  • May require more setup and configuration for specific use cases

Code Comparison

DeepSpeed:

import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)

Google Research:

import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

Key Differences

  • DeepSpeed focuses on optimizing deep learning training, while Google Research covers a broader range of AI/ML topics
  • DeepSpeed provides specific tools for large-scale model training, whereas Google Research offers a variety of research projects and implementations
  • Google Research repository includes more diverse algorithms and techniques across multiple domains

Use Cases

  • DeepSpeed: Ideal for researchers and practitioners working on large-scale deep learning models and distributed training
  • Google Research: Suitable for exploring various AI/ML research topics and implementing cutting-edge algorithms across different domains
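
For a flavor of how ZeRO is enabled in practice: DeepSpeed is configured through a JSON-style config supplied at initialization rather than code changes. A minimal sketch with illustrative values (recent DeepSpeed releases accept the config as a dict via the config keyword; model stands for an existing PyTorch module):

import deepspeed

# Illustrative config: fp16 training with ZeRO stage 2 optimizer-state sharding
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)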

pytorch/pytorch (82,049 stars)

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • More focused and cohesive project, specifically for deep learning
  • Larger community and more widespread adoption in industry and academia
  • Better documentation and tutorials for beginners

Cons of PyTorch

  • Narrower scope, primarily focused on deep learning
  • Less diverse range of research topics and experimental projects

Code Comparison

PyTorch:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)

google-research:

import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([4, 5, 6])
z = tf.add(x, y)
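
One thing these side-by-side element-wise ops do not show is PyTorch's define-by-run autograd, which is what the "dynamic neural networks" in its description refers to; a brief illustration:

import torch

# The graph is built on the fly during execution, then differentiated
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # tensor([2., 4., 6.])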

Summary

PyTorch is a more focused deep learning framework with a larger community and better documentation. google-research is a broader collection of research projects covering various AI and ML topics. PyTorch is more suitable for those specifically interested in deep learning, while google-research offers a wider range of experimental projects and research areas. The code comparison shows similar syntax for basic operations, with PyTorch using its own library and google-research often utilizing TensorFlow.

openai/gym (34,643 stars)

A toolkit for developing and comparing reinforcement learning algorithms.

Pros of gym

  • Focused specifically on reinforcement learning environments
  • Well-documented API with consistent interface across environments
  • Active community and widespread adoption in RL research

Cons of gym

  • Narrower scope compared to google-research's diverse projects
  • Less frequent updates and maintenance in recent years
  • Limited to Python, while google-research includes multiple languages

Code Comparison

gym example:

import gym
env = gym.make('CartPole-v1')
observation = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random policy
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()

google-research example (TensorFlow Probability):

import tensorflow_probability as tfp
tfd = tfp.distributions
normal = tfd.Normal(loc=0., scale=1.)
z = normal.sample([10])
log_prob = normal.log_prob(z)

Both repositories provide valuable resources for AI researchers and practitioners. gym offers a standardized platform for reinforcement learning experiments, while google-research covers a broader range of AI topics and tools. The choice between them depends on the specific research needs and areas of focus.

README

Google Research

This repository contains code released by Google Research.

All datasets in this repository are released under the CC BY 4.0 International license, which can be found here: https://creativecommons.org/licenses/by/4.0/legalcode. All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file.


Because the repo is large, we recommend you download only the subdirectory of interest:

  • Use the GitHub web editor to open the project: change the URL from github.com to github.dev in the address bar.
  • In the left navigation panel, right-click on the folder of interest and select "Download".

If you'd like to submit a pull request, you'll need to clone the repository; we recommend making a shallow clone (without history).

git clone git@github.com:google-research/google-research.git --depth=1

Disclaimer: This is not an official Google product.

Updated in 2023.