pytorch/examples

A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.

Top Related Projects

  • onnx/models: A collection of pre-trained, state-of-the-art models in the ONNX format
  • microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
  • google-research/google-research: Google Research

Quick Overview

The pytorch/examples repository on GitHub is a collection of example code and tutorials for the PyTorch deep learning library. It provides a wide range of sample projects and applications that demonstrate the capabilities of PyTorch, making it a valuable resource for both beginners and experienced PyTorch users.

Pros

  • Comprehensive Examples: The repository covers a diverse set of machine learning and deep learning tasks, including image classification, natural language processing, reinforcement learning, and more.
  • Beginner-Friendly: The examples are well-documented and often include step-by-step instructions, making it easier for newcomers to understand and get started with PyTorch.
  • Active Development: The repository is actively maintained by the PyTorch team and the broader open-source community, ensuring that the examples stay up-to-date with the latest PyTorch features and best practices.
  • Customizable: The examples can be easily modified and extended, allowing users to adapt them to their specific needs and use cases.

Cons

  • Lack of Consistency: The examples in the repository may vary in terms of code style, documentation quality, and overall organization, which can make it challenging for users to navigate the codebase.
  • Potential Outdated Content: As the PyTorch library evolves, some of the examples may become outdated or require updates to work with the latest version of the library.
  • Limited Scope: While the repository covers a wide range of topics, it may not provide examples for every possible use case or application of PyTorch.
  • Steep Learning Curve: For complete beginners, the examples may still require a certain level of familiarity with PyTorch and machine learning concepts.

Code Examples

Here are a few code examples from the pytorch/examples repository:

Image Classification with CIFAR10

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two conv blocks with 2x2 max pooling: 3x32x32 -> 6x14x14 -> 16x5x5
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # Three fully connected layers down to the 10 CIFAR10 classes
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # flatten to (batch, 400)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)  # raw logits; pair with nn.CrossEntropyLoss
        return x

This example demonstrates a simple convolutional neural network for image classification on the CIFAR10 dataset.
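
The repository pairs networks like this with a standard training loop; the following is a minimal sketch of that pattern (hyperparameters and the data root are illustrative):

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # a couple of epochs, purely illustrative
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()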

Language Modeling with Transformer

import math
import torch
import torch.nn as nn
from torch.nn import TransformerEncoder, TransformerEncoderLayer

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        # Fixed sinusoidal position encodings, shape (max_len, 1, d_model)
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x):
        return self.dropout(x + self.pe[:x.size(0)])

class TransformerModel(nn.Module):
    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        self.model_type = 'Transformer'
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.encoder = nn.Embedding(ntoken, ninp)  # token embedding
        self.ninp = ninp
        self.decoder = nn.Linear(ninp, ntoken)     # project back to the vocabulary
        self.init_weights()

    def init_weights(self):
        nn.init.uniform_(self.encoder.weight, -0.1, 0.1)
        nn.init.zeros_(self.decoder.bias)
        nn.init.uniform_(self.decoder.weight, -0.1, 0.1)

    def forward(self, src, src_mask):
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_mask)
        return self.decoder(output)
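
This model follows the TransformerModel from the repository's word_language_model example; the PositionalEncoding module, the weight initialization, and the end of forward were filled in above so the snippet is self-contained. A minimal usage sketch, with made-up sizes:

ntokens, emsize, nhead, nhid, nlayers = 10000, 200, 2, 200, 2  # illustrative sizes
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers)

seq_len, batch_size = 35, 8
src = torch.randint(0, ntokens, (seq_len, batch_size))  # shape (seq_len, batch)
# Causal mask: -inf above the diagonal so a position cannot attend to later ones
src_mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)
logits = model(src, src_mask)  # shape (seq_len, batch, ntokens)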

Competitor Comparisons

onnx/models

A collection of pre-trained, state-of-the-art models in the ONNX format

Pros of models

  • Broader ecosystem support: ONNX models can be used across multiple frameworks
  • Larger variety of pre-trained models available
  • Focus on model interoperability and deployment

Cons of models

  • Less comprehensive documentation and tutorials
  • Fewer code examples for training and fine-tuning
  • Limited to inference-only use cases in some scenarios

Code Comparison

models:

import onnx
import onnxruntime as ort

# Load the ONNX model and run it with ONNX Runtime
model = onnx.load("model.onnx")
session = ort.InferenceSession(model.SerializeToString())
# input_data: a NumPy array matching the model's expected input name and shape
output = session.run(None, {"input": input_data})

examples:

import torch
import torchvision.models as models

# Note: pretrained=True is deprecated in newer torchvision;
# weights=models.ResNet18_Weights.DEFAULT is the current form
model = models.resnet18(pretrained=True)
# input_tensor: a preprocessed image batch (see the sketch below)
output = model(input_tensor)

The examples repository provides more comprehensive PyTorch-specific code examples for various tasks, while the models repository focuses on providing pre-trained ONNX models for inference across different frameworks. examples is better suited for learning PyTorch and developing new models, whereas models is ideal for deploying pre-trained models in production environments with framework flexibility.
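
One practical note on the examples snippet: a pretrained torchvision model expects evaluation mode and ImageNet-style preprocessing at inference time. A minimal sketch, assuming img is a PIL image:

import torch
from torchvision import transforms

model.eval()  # disable dropout and batch-norm updates for inference
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(img).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    probs = model(input_tensor).softmax(dim=1)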

microsoft/DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Offers advanced optimization techniques for large-scale model training
  • Provides built-in distributed training capabilities
  • Includes memory-efficient optimizers and gradient compression

Cons of DeepSpeed

  • Steeper learning curve due to more complex features
  • May require more configuration for optimal performance
  • Less suitable for simple or small-scale projects

Code Comparison

DeepSpeed:

import deepspeed

# model: a standard nn.Module; params: its trainable parameters;
# args: parsed command-line arguments carrying the DeepSpeed config
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)

PyTorch Examples:

optimizer = optim.SGD(model.parameters(), lr=0.01)
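
For contrast, the full training step that DeepSpeed's engine wraps looks roughly like this in plain PyTorch (assuming model, criterion, inputs, and targets are defined as usual):

# Plain PyTorch step; under DeepSpeed, model_engine.backward(loss) and
# model_engine.step() replace the last two lines, and gradient zeroing
# is handled by the engine itself
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()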

Key Differences

DeepSpeed focuses on scalability and efficiency for large models, while PyTorch Examples provides simpler implementations for learning and experimentation. DeepSpeed offers more advanced features such as the ZeRO optimizer and pipeline parallelism, whereas PyTorch Examples demonstrates basic usage of PyTorch functionality.
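
Features like ZeRO are switched on through DeepSpeed's configuration, which deepspeed.initialize accepts as a JSON file path or a dict via its config argument; a minimal sketch with illustrative values:

# Illustrative DeepSpeed config; real runs tune these values
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)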

Use Cases

  • DeepSpeed: Large-scale model training, distributed computing environments
  • PyTorch Examples: Learning PyTorch basics, prototyping, smaller projects

Community and Support

DeepSpeed has a growing community and is actively developed by Microsoft. PyTorch Examples is maintained by the PyTorch team and serves as a reference for the broader PyTorch ecosystem.

huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library of pre-trained models for various NLP tasks
  • High-level APIs for easy fine-tuning and deployment
  • Active community and frequent updates

Cons of transformers

  • Steeper learning curve for beginners
  • Larger package size and dependencies
  • May be overkill for simple NLP tasks

Code Comparison

transformers:

from transformers import BertTokenizer, BertForSequenceClassification

# Downloads tokenizer and model weights from the Hugging Face Hub on first use
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)

examples:

import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)

The transformers code showcases the ease of using pre-trained models, while the examples code demonstrates basic PyTorch model creation. transformers offers a higher-level abstraction, whereas examples provides more flexibility for custom architectures.
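
For the transformers snippet, predictions are read off the returned logits (the freshly loaded classification head is untrained, so labels are essentially random until fine-tuning):

import torch

with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # logits: (batch, num_labels)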

google-research/google-research

Pros of google-research

  • Broader scope, covering a wide range of research topics beyond just machine learning
  • More extensive collection of projects and implementations
  • Regularly updated with cutting-edge research from Google's teams

Cons of google-research

  • Less focused on providing educational examples for beginners
  • May be more challenging to navigate due to its large size and diverse content
  • Some projects might be more experimental or research-oriented, potentially less production-ready

Code Comparison

google-research (TensorFlow-based):

import tensorflow as tf

model = tf.keras.Sequential([
  tf.keras.layers.Dense(64, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])

pytorch/examples:

import torch.nn as nn

model = nn.Sequential(
  nn.Linear(784, 64),
  nn.ReLU(),
  nn.Linear(64, 10),
  nn.Softmax(dim=1)
)

The code snippets demonstrate the difference in syntax and structure between TensorFlow (commonly used in google-research) and PyTorch (used in pytorch/examples) for creating a simple neural network model.
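
One caveat when adapting the PyTorch snippet for training: nn.CrossEntropyLoss expects raw logits and applies log-softmax internally, so the trailing nn.Softmax layer is typically dropped. A minimal sketch:

import torch
import torch.nn as nn

# Softmax omitted: CrossEntropyLoss handles the normalization itself
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 784)               # a batch of flattened 28x28 inputs
targets = torch.randint(0, 10, (32,))
loss = criterion(model(x), targets)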

README

PyTorch Examples

Run Examples

https://pytorch.org/examples/

pytorch/examples is a repository showcasing examples of using PyTorch. The goal is to provide curated, short, high-quality examples with few or no dependencies that are substantially different from each other and can be emulated in your existing work.

Available models

Additionally, the repository links to a list of good examples hosted in their own repositories.

Contributing

If you'd like to contribute your own example or fix a bug, please take a look at CONTRIBUTING.md.