examples
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
Top Related Projects
A collection of pre-trained, state-of-the-art models in the ONNX format
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Google Research
Quick Overview
The pytorch/examples repository on GitHub is a collection of example code and tutorials for the PyTorch deep learning library. It provides a wide range of sample projects and applications that demonstrate the capabilities of PyTorch, making it a valuable resource for both beginners and experienced PyTorch users.
Pros
- Comprehensive Examples: The repository covers a diverse set of machine learning and deep learning tasks, including image classification, natural language processing, reinforcement learning, and more.
- Beginner-Friendly: The examples are well-documented and often include step-by-step instructions, making it easier for newcomers to understand and get started with PyTorch.
- Active Development: The repository is actively maintained by the PyTorch team and the broader open-source community, ensuring that the examples stay up-to-date with the latest PyTorch features and best practices.
- Customizable: The examples can be easily modified and extended, allowing users to adapt them to their specific needs and use cases.
Cons
- Lack of Consistency: The examples in the repository may vary in terms of code style, documentation quality, and overall organization, which can make it challenging for users to navigate the codebase.
- Potential Outdated Content: As the PyTorch library evolves, some of the examples may become outdated or require updates to work with the latest version of the library.
- Limited Scope: While the repository covers a wide range of topics, it may not provide examples for every possible use case or application of PyTorch.
- Steep Learning Curve: For complete beginners, the examples may still require a certain level of familiarity with PyTorch and machine learning concepts.
Code Examples
Here are a few code examples from the pytorch/examples repository:
Image Classification with CIFAR10
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two convolutional layers with max pooling, followed by three
        # fully connected layers mapping to the 10 CIFAR10 classes.
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # flatten for the fully connected layers
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
This example demonstrates a simple convolutional neural network for image classification on the CIFAR10 dataset.
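The class above only defines the network; a minimal training sketch for it might look like the following, where Net is the class defined above and the transform, batch size, and optimizer settings are illustrative assumptions rather than the repository's exact script.

# A minimal sketch of training Net on CIFAR10 (illustrative settings).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = datasets.CIFAR10(root="./data", train=True, download=True,
                            transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # two passes over the training set
    for inputs, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()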
Language Modeling with Transformer
import math

import torch.nn as nn
from torch.nn import TransformerEncoder, TransformerEncoderLayer

class TransformerModel(nn.Module):
    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
        super(TransformerModel, self).__init__()
        self.model_type = 'Transformer'
        # PositionalEncoding and init_weights are defined elsewhere in the
        # full word-level language modeling example.
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.encoder = nn.Embedding(ntoken, ninp)
        self.ninp = ninp
        self.decoder = nn.Linear(ninp, ntoken)
        self.init_weights()

    def forward(self, src, src_mask):
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_mask)
        output = self.decoder(output)
        return output
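The PositionalEncoding module is referenced above but not defined in the snippet. A minimal sketch of the standard sinusoidal encoding, assuming the (seq_len, batch, ninp) input layout used by this model:

import math

import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding (a sketch of the standard recipe)."""

    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) *
                             (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)  # fixed, not a learned parameter

    def forward(self, x):
        # x: (seq_len, batch, d_model)
        x = x + self.pe[:x.size(0)]
        return self.dropout(x)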
Competitor Comparisons
A collection of pre-trained, state-of-the-art models in the ONNX format
Pros of models
- Broader ecosystem support: ONNX models can be used across multiple frameworks
- Larger variety of pre-trained models available
- Focus on model interoperability and deployment
Cons of models
- Less comprehensive documentation and tutorials
- Fewer code examples for training and fine-tuning
- Limited to inference-only use cases in some scenarios
Code Comparison
models:
import onnx
import onnxruntime as ort

# Load a serialized ONNX model and run it with ONNX Runtime; input_data is
# assumed to be a NumPy array matching the model's input shape.
model = onnx.load("model.onnx")
session = ort.InferenceSession(model.SerializeToString())
output = session.run(None, {"input": input_data})
examples:
import torch
import torchvision.models as models

# Load a pretrained ResNet-18 and run a forward pass; input_tensor is
# assumed to be an (N, 3, 224, 224) image batch.
model = models.resnet18(pretrained=True)
output = model(input_tensor)
The examples repository provides more comprehensive PyTorch-specific code examples for various tasks, while the models repository focuses on providing pre-trained ONNX models for inference across different frameworks. examples is better suited for learning PyTorch and developing new models, whereas models is ideal for deploying pre-trained models in production environments with framework flexibility.
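The two workflows can also be bridged: a model developed in PyTorch can be exported to ONNX and then served as in the models snippet above. A minimal sketch, assuming a standard 224x224 image input and an illustrative file name:

import torch
import torchvision.models as models

# Sketch: export a torchvision model to ONNX so it can be run with
# ONNX Runtime as shown earlier (file and tensor names are illustrative).
model = models.resnet18(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])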
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- Optimized for large-scale distributed training and inference
- Offers advanced memory optimization techniques like ZeRO
- Provides built-in support for mixed precision training
Cons of DeepSpeed
- Steeper learning curve due to more complex configuration
- Less beginner-friendly compared to PyTorch examples
- May introduce overhead for smaller models or datasets
Code Comparison
PyTorch examples:
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr)
for epoch in range(1, args.epochs + 1):
    train(args, model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)
DeepSpeed:
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=params
)
for epoch in range(args.epochs):
    for batch in train_loader:
        loss = model_engine(batch)
        model_engine.backward(loss)
        model_engine.step()
DeepSpeed offers more advanced features for large-scale training, while PyTorch examples provide simpler implementations for learning and prototyping. DeepSpeed requires additional setup but can significantly improve training efficiency for large models and datasets.
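Much of that additional setup lives in the DeepSpeed configuration. Recent DeepSpeed versions accept the configuration as a dict via the config argument of deepspeed.initialize; a minimal sketch, with illustrative values (ZeRO stage, batch size, and learning rate here are assumptions, not recommendations):

import deepspeed
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in model for the sketch

# Illustrative config: ZeRO stage 2 plus fp16 mixed precision.
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)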
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Extensive library of pre-trained models for various NLP tasks
- High-level APIs for easy fine-tuning and deployment
- Active community and frequent updates
Cons of transformers
- Steeper learning curve for beginners
- Larger package size and dependencies
- May be overkill for simple NLP tasks
Code Comparison
transformers:
from transformers import BertTokenizer, BertForSequenceClassification

# Load a pretrained BERT checkpoint and classify a single sentence.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
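As a follow-up to the snippet above, the returned outputs object carries raw logits, and a class prediction can be read off with an argmax. Note that the classification head of an untuned bert-base-uncased checkpoint is randomly initialized, so this is only a usage illustration:

import torch

predicted_class = torch.argmax(outputs.logits, dim=-1)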
examples:
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.linear = nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)
The transformers code showcases the ease of using pre-trained models, while the examples code demonstrates basic PyTorch model creation. transformers offers a higher-level abstraction, whereas examples provides more flexibility for custom architectures.
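To make the flexibility point concrete, here is a minimal sketch of one explicit training step for SimpleModel, using random data and illustrative hyperparameters:

import torch
import torch.nn as nn

model = SimpleModel()  # the class defined above
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One explicit training step on random data (8 samples, 10 features);
# every stage of the update is visible and replaceable.
x = torch.randn(8, 10)
y = torch.randn(8, 1)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()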
Google Research
Pros of google-research
- Broader scope, covering a wide range of research topics beyond just machine learning
- More extensive collection of projects and implementations
- Regularly updated with cutting-edge research from Google's teams
Cons of google-research
- Less focused on providing educational examples for beginners
- May be more challenging to navigate due to its large size and diverse content
- Some projects might be more experimental or research-oriented, potentially less production-ready
Code Comparison
google-research (TensorFlow-based):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
pytorch/examples:
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
    nn.Softmax(dim=1)
)
The code snippets demonstrate the difference in syntax and structure between TensorFlow (commonly used in google-research) and PyTorch (used in pytorch/examples) for creating a simple neural network model.
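The contrast carries over to training as well: Keras typically hides the loop behind compile() and fit(), while PyTorch spells it out. A minimal sketch of both, on illustrative random data:

import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn

# Illustrative random data: 32 samples with 784 features, 10 classes.
x_np = np.random.rand(32, 784).astype("float32")
y_np = np.random.randint(0, 10, size=32).astype("int64")

# Keras: the training loop is hidden behind compile() and fit().
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
keras_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
keras_model.fit(x_np, y_np, epochs=1, verbose=0)

# PyTorch: every step is explicit (CrossEntropyLoss applies softmax
# internally, so the model outputs raw logits here).
torch_model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(torch_model.parameters())
criterion = nn.CrossEntropyLoss()
x_t, y_t = torch.from_numpy(x_np), torch.from_numpy(y_np)
optimizer.zero_grad()
loss = criterion(torch_model(x_t), y_t)
loss.backward()
optimizer.step()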
README
PyTorch Examples
pytorch/examples is a repository showcasing examples of using PyTorch. The goal is to have curated, short, high-quality examples with few or no dependencies, each substantially different from the others, that can be emulated in your existing work.
- For tutorials: https://github.com/pytorch/tutorials
- For changes to pytorch.org: https://github.com/pytorch/pytorch.github.io
- For a general model hub: https://pytorch.org/hub/ or https://huggingface.co/models
- For recipes on how to run PyTorch in production: https://github.com/facebookresearch/recipes
- For general Q&A and support: https://discuss.pytorch.org/
Available models
- Image classification (MNIST) using Convnets
- Word-level Language Modeling using RNN and Transformer
- Training Imagenet Classifiers with Popular Networks
- Generative Adversarial Networks (DCGAN)
- Variational Auto-Encoders
- Superresolution using an efficient sub-pixel convolutional neural network
- Hogwild training of shared ConvNets across multiple processes on MNIST
- Training a CartPole to balance in OpenAI Gym with actor-critic
- Natural Language Inference (SNLI) with GloVe vectors, LSTMs, and torchtext
- Time sequence prediction - use an LSTM to learn Sine waves
- Implement the Neural Style Transfer algorithm on images
- Reinforcement Learning with Actor Critic and REINFORCE algorithms on OpenAI Gym
- PyTorch Module Transformations using fx
- Distributed PyTorch examples with Distributed Data Parallel and RPC
- Several examples illustrating the C++ Frontend
- Image Classification Using Forward-Forward
- Language Translation using Transformers
Additionally, a number of good examples are hosted in their own repositories.
Contributing
If you'd like to contribute your own example or fix a bug please make sure to take a look at CONTRIBUTING.md.