
NVIDIA/DeepLearningExamples

State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

Top Related Projects

  • tensorflow/models: Models and examples built with TensorFlow
  • pytorch/examples: A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
  • microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • facebookresearch/fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Quick Overview

NVIDIA/DeepLearningExamples is a GitHub repository that provides state-of-the-art deep learning examples optimized for NVIDIA GPUs. It includes scripts, models, and documentation for various deep learning tasks across different frameworks like PyTorch, TensorFlow, and MXNet. The repository aims to showcase best practices and high-performance implementations for AI researchers and developers.

Pros

  • Optimized for NVIDIA GPUs, ensuring high performance and efficiency
  • Covers a wide range of deep learning tasks and popular frameworks
  • Includes detailed documentation and performance benchmarks
  • Regularly updated with new models and techniques

Cons

  • Primarily focused on NVIDIA hardware, which may limit usefulness for users with other GPU brands
  • Some examples may require significant computational resources
  • Learning curve can be steep for beginners in deep learning
  • Not all examples are maintained at the same frequency

Code Examples

  1. Loading a pre-trained BERT model with the Hugging Face Transformers library (PyTorch backend):
from transformers import BertModel, BertTokenizer

model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

input_text = "Example sentence for BERT."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model(**inputs)
  2. Training a ResNet-50 model on ImageNet using TensorFlow:
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.optimizers import SGD

# Build an untrained ResNet-50 with 1000 ImageNet classes
model = ResNet50(weights=None, classes=1000)
optimizer = SGD(learning_rate=0.1, momentum=0.9)

model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train_dataset and val_dataset are assumed to be prepared tf.data.Dataset
# pipelines yielding (image, label) batches
model.fit(train_dataset, epochs=90, validation_data=val_dataset)
  3. Implementing NVIDIA Apex for mixed precision training in PyTorch:
import torch
from apex import amp

# YourModel, num_epochs, and train_loader are user-defined placeholders;
# Apex expects the model to be on the GPU before amp.initialize is called
model = YourModel().cuda()
criterion = torch.nn.CrossEntropyLoss()  # example loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap model and optimizer for mixed precision ("O1" = conservative mixed precision)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for epoch in range(num_epochs):
    for data, target in train_loader:
        data, target = data.cuda(), target.cuda()
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        # Scale the loss so FP16 gradients do not underflow
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()

Getting Started

To get started with NVIDIA/DeepLearningExamples:

  1. Clone the repository:

    git clone https://github.com/NVIDIA/DeepLearningExamples.git
    cd DeepLearningExamples
    
  2. Choose a specific example (e.g., BERT for PyTorch):

    cd PyTorch/LanguageModeling/BERT
    
  3. Follow the README instructions for setting up the environment and running the example:

    # Create and activate a new conda environment
    conda env create -f requirements.yml
    conda activate nvidia_bert_pytorch
    
    # Run the training script
    python run_pretraining.py --input_dir /path/to/your/data
    

Note: Specific instructions may vary depending on the chosen example and framework.

Competitor Comparisons

Models and examples built with TensorFlow

Pros of TensorFlow Models

  • Broader range of models and applications, covering various domains
  • More extensive documentation and community support
  • Regular updates and contributions from the TensorFlow team

Cons of TensorFlow Models

  • Less focus on GPU optimization compared to DeepLearningExamples
  • May require more setup and configuration for high-performance scenarios
  • Some models might not be as production-ready as those in DeepLearningExamples

Code Comparison

DeepLearningExamples:

import torch
from apex import amp

model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()

TensorFlow Models:

import tensorflow as tf

with tf.GradientTape() as tape:
    predictions = model(inputs, training=True)
    loss = loss_function(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))

The DeepLearningExamples code showcases NVIDIA's Apex library for mixed precision training, while TensorFlow Models uses standard TensorFlow operations. DeepLearningExamples focuses on GPU optimization, whereas TensorFlow Models provides a more general approach suitable for various hardware configurations.

A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.

Pros of PyTorch Examples

  • Simpler, more beginner-friendly implementations
  • Wider range of basic deep learning models and tasks
  • More frequently updated with community contributions

Cons of PyTorch Examples

  • Less focus on performance optimization
  • Fewer industry-scale, production-ready examples
  • Limited support for distributed training and multi-GPU setups

Code Comparison

DeepLearningExamples (BERT fine-tuning):

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=num_labels)
model = model.to(device)
if args.n_gpu > 1:
    model = torch.nn.DataParallel(model)

PyTorch Examples (MNIST classification):

model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr)
for epoch in range(1, args.epochs + 1):
    train(args, model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)

The DeepLearningExamples code shows more advanced features like multi-GPU support, while PyTorch Examples focuses on simplicity and readability for basic tasks.

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library of pre-trained models for various NLP tasks
  • Active community and frequent updates
  • Easy-to-use API for fine-tuning and inference

Cons of transformers

  • Less focus on performance optimization for specific hardware
  • May require additional setup for distributed training
  • Limited examples for non-NLP tasks

Code Comparison

transformers:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)

DeepLearningExamples:

import torch
# Module paths in this snippet are illustrative of the example's local code layout
from model.bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
# BertTokenizer is assumed to be imported from the example's tokenization utilities
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)

The code comparison shows that transformers provides a more streamlined API for loading and using pre-trained models, while DeepLearningExamples may require more manual setup and configuration. However, DeepLearningExamples often includes optimizations for NVIDIA hardware and distributed training scenarios.

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Offers more advanced optimization techniques like ZeRO (Zero Redundancy Optimizer)
  • Provides better support for distributed training across multiple GPUs and nodes
  • Includes a more comprehensive set of tools for large-scale model training

Cons of DeepSpeed

  • Steeper learning curve due to more complex features and configurations
  • May require more setup and fine-tuning for optimal performance
  • Less focus on providing ready-to-use examples for specific models or tasks

Code Comparison

DeepSpeed:

import deepspeed

# args, model, and params (the model's trainable parameters) are assumed to be defined
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params
)

DeepLearningExamples:

# Assumes the torch.distributed process group has already been initialized
model = torch.nn.parallel.DistributedDataParallel(model)
optimizer = optim.SGD(model.parameters(), lr=args.lr)

Summary

DeepSpeed offers more advanced features for large-scale model training and optimization, while DeepLearningExamples provides a simpler approach with ready-to-use examples. DeepSpeed may require more setup but offers better scalability, while DeepLearningExamples is easier to get started with for specific tasks. The choice between them depends on the project's scale and requirements.

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More focused on sequence-to-sequence learning and natural language processing tasks
  • Offers a wider range of pre-trained models and benchmarks for NLP
  • Provides a flexible and modular architecture for easier customization

Cons of fairseq

  • Less emphasis on other deep learning domains (e.g., computer vision, speech recognition)
  • May have a steeper learning curve for beginners due to its more specialized nature
  • Potentially less optimized for NVIDIA hardware compared to DeepLearningExamples

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel
model = TransformerModel.from_pretrained('/path/to/model', 'checkpoint.pt')
model.translate('Hello world!')

DeepLearningExamples:

# Illustrative only; the module path and pre-trained model name are placeholders
from nvidia.transformer import TransformerModel
model = TransformerModel.from_pretrained('nvidia_transformer_large')
model.translate('Hello world!')

Both repositories provide high-quality implementations of deep learning models, but they cater to different needs. fairseq is more specialized for NLP tasks, while DeepLearningExamples covers a broader range of deep learning applications with a focus on NVIDIA hardware optimization.


README

NVIDIA Deep Learning Examples for Tensor Cores

Introduction

This repository provides State-of-the-Art Deep Learning examples that are easy to train and deploy, achieving the best reproducible accuracy and performance with NVIDIA CUDA-X software stack running on NVIDIA Volta, Turing and Ampere GPUs.

NVIDIA GPU Cloud (NGC) Container Registry

These examples, along with our NVIDIA deep learning software stack, are provided in a monthly updated Docker container on the NGC container registry (https://ngc.nvidia.com). These containers include:

  • The latest NVIDIA examples from this repository
  • The latest NVIDIA contributions shared upstream to the respective framework
  • The latest NVIDIA Deep Learning software libraries, such as cuDNN, NCCL, and cuBLAS, all of which have been through a rigorous monthly quality-assurance process to ensure that they provide the best possible performance
  • Monthly release notes for each of the NVIDIA optimized containers
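
A minimal sketch of pulling and launching one of these containers follows; the image tag shown is illustrative, so check the NGC catalog (https://ngc.nvidia.com) for current releases:

# Pull a monthly PyTorch container from NGC (tag is illustrative)
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Launch it with GPU access and mount this repository into the container
docker run --gpus all -it --rm \
    -v $(pwd):/workspace/DeepLearningExamples \
    nvcr.io/nvidia/pytorch:24.01-py3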

Computer Vision

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EfficientNet-B0 | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |
| EfficientNet-B4 | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |
| EfficientNet-WideSE-B0 | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |
| EfficientNet-WideSE-B4 | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |
| EfficientNet v1-B0 | TensorFlow2 | Yes | Yes | Yes | Example | - | Supported | Yes | - |
| EfficientNet v1-B4 | TensorFlow2 | Yes | Yes | Yes | Example | - | Supported | Yes | - |
| EfficientNet v2-S | TensorFlow2 | Yes | Yes | Yes | Example | - | Supported | Yes | - |
| GPUNet | PyTorch | Yes | Yes | - | Example | Yes | Example | Yes | - |
| Mask R-CNN | PyTorch | Yes | Yes | - | Example | - | Supported | - | Yes |
| Mask R-CNN | TensorFlow2 | Yes | Yes | - | Example | - | Supported | Yes | - |
| nnUNet | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |
| ResNet-50 | MXNet | Yes | Yes | - | Supported | - | Supported | - | - |
| ResNet-50 | PaddlePaddle | Yes | Yes | - | Example | - | Supported | - | - |
| ResNet-50 | PyTorch | Yes | Yes | - | Example | - | Example | Yes | - |
| ResNet-50 | TensorFlow | Yes | Yes | - | Supported | - | Supported | Yes | - |
| ResNeXt-101 | PyTorch | Yes | Yes | - | Example | - | Example | Yes | - |
| ResNeXt-101 | TensorFlow | Yes | Yes | - | Supported | - | Supported | Yes | - |
| SE-ResNeXt-101 | PyTorch | Yes | Yes | - | Example | - | Example | Yes | - |
| SE-ResNeXt-101 | TensorFlow | Yes | Yes | - | Supported | - | Supported | Yes | - |
| SSD | PyTorch | Yes | Yes | - | Supported | - | Supported | - | Yes |
| SSD | TensorFlow | Yes | Yes | - | Supported | - | Supported | Yes | Yes |
| U-Net Med | TensorFlow2 | Yes | Yes | - | Example | - | Supported | Yes | - |

Natural Language Processing

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT | PyTorch | Yes | Yes | Yes | Example | - | Example | Yes | - |
| GNMT | PyTorch | Yes | Yes | - | Supported | - | Supported | - | - |
| ELECTRA | TensorFlow2 | Yes | Yes | Yes | Supported | - | Supported | Yes | - |
| BERT | TensorFlow | Yes | Yes | Yes | Example | - | Example | Yes | Yes |
| BERT | TensorFlow2 | Yes | Yes | Yes | Supported | - | Supported | Yes | - |
| GNMT | TensorFlow | Yes | Yes | - | Supported | - | Supported | - | - |
| Faster Transformer | TensorFlow | - | - | - | Example | - | Supported | - | - |

Recommender Systems

| Models | Framework | AMP | Multi-GPU | Multi-Node | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DLRM | PyTorch | Yes | Yes | - | Yes | Example | Yes | Yes |
| DLRM | TensorFlow2 | Yes | Yes | Yes | - | Supported | Yes | - |
| NCF | PyTorch | Yes | Yes | - | - | Supported | - | - |
| Wide&Deep | TensorFlow | Yes | Yes | - | - | Supported | Yes | - |
| Wide&Deep | TensorFlow2 | Yes | Yes | - | - | Supported | Yes | - |
| NCF | TensorFlow | Yes | Yes | - | - | Supported | Yes | - |
| VAE-CF | TensorFlow | Yes | Yes | - | - | Supported | - | - |
| SIM | TensorFlow2 | Yes | Yes | - | - | Supported | Yes | - |

Speech to Text

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Jasper | PyTorch | Yes | Yes | - | Example | Yes | Example | Yes | Yes |
| QuartzNet | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |

Text to Speech

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FastPitch | PyTorch | Yes | Yes | - | Example | - | Example | Yes | Yes |
| FastSpeech | PyTorch | Yes | Yes | - | Example | - | Supported | - | - |
| Tacotron 2 and WaveGlow | PyTorch | Yes | Yes | - | Example | Yes | Example | Yes | - |
| HiFi-GAN | PyTorch | Yes | Yes | - | Supported | - | Supported | Yes | - |

Graph Neural Networks

| Models | Framework | AMP | Multi-GPU | Multi-Node | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SE(3)-Transformer | PyTorch | Yes | Yes | - | - | Supported | - | - |
| MoFlow | PyTorch | Yes | Yes | - | - | Supported | - | - |

Time-Series Forecasting

| Models | Framework | AMP | Multi-GPU | Multi-Node | TensorRT | ONNX | Triton | DLC | NB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Temporal Fusion Transformer | PyTorch | Yes | Yes | - | Example | Yes | Example | Yes | - |

NVIDIA support

In each of the network READMEs, we indicate the level of support that will be provided. The range is from ongoing updates and improvements to a point-in-time release for thought leadership.

Glossary

Multinode Training: Supported on a pyxis/enroot Slurm cluster.

Deep Learning Compiler (DLC): TensorFlow XLA and PyTorch JIT and/or TorchScript.

Accelerated Linear Algebra (XLA): XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. The results are improvements in speed and memory usage.
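
As a minimal illustration of opting into XLA from user code (not taken from this repository), a TensorFlow function can be compiled by passing jit_compile=True to tf.function:

import tensorflow as tf

# jit_compile=True asks TensorFlow to compile this function with XLA;
# the computation itself is just a toy illustration.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 128])
w = tf.random.normal([128, 64])
b = tf.zeros([64])
y = dense_relu(x, w, b)  # compiled with XLA on first call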

PyTorch JIT and/or TorchScript: TorchScript is a way to create serializable and optimizable models from PyTorch code. TorchScript is an intermediate representation of a PyTorch model (a subclass of nn.Module) that can then be run in a high-performance environment such as C++.
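
A minimal sketch of scripting and serializing a toy module with TorchScript (the module here is a placeholder, not one of the repository's models):

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

scripted = torch.jit.script(TinyNet())      # compile the module to TorchScript
scripted.save("tiny_net.pt")                # serialize to a Python-free format
restored = torch.jit.load("tiny_net.pt")    # can also be loaded from C++ via LibTorch
print(restored(torch.randn(2, 16)).shape)   # torch.Size([2, 4])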

Automatic Mixed Precision (AMP): Automatic Mixed Precision (AMP) enables mixed precision training on Volta, Turing, and NVIDIA Ampere GPU architectures automatically.
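
In PyTorch, native AMP (torch.cuda.amp) looks roughly like the sketch below; this is an alternative to the Apex example shown earlier, the model and data are toy placeholders, and a CUDA GPU is assumed:

import torch

device = "cuda"
model = torch.nn.Linear(512, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    inputs = torch.randn(32, 512, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # run the forward pass in mixed precision
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()            # scale the loss so FP16 gradients do not underflow
    scaler.step(optimizer)
    scaler.update()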

TensorFloat-32 (TF32): TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs. TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
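
In PyTorch, whether TF32 may be used for matmuls and cuDNN convolutions is controlled by backend flags, as in this short sketch:

import torch

# Inspect the current TF32 settings (effective on Ampere and newer GPUs)
print(torch.backends.cuda.matmul.allow_tf32)   # matmul TF32 setting
print(torch.backends.cudnn.allow_tf32)         # cuDNN convolution TF32 setting

# Explicitly enable (or disable) TF32 to trade a little precision for speed
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True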

Jupyter Notebooks (NB): The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text.

Feedback / Contributions

We're posting these examples on GitHub to better support the community, facilitate feedback, and collect and implement contributions using GitHub Issues and pull requests. We welcome all contributions!

Known issues

In each of the network READMEs, we indicate any known issues and encourage the community to provide feedback.