
triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.


Top Related Projects

  • NVIDIA TensorRT: NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
  • PyTorch Serve: Serve, optimize and scale PyTorch models in production
  • TensorFlow Serving: A flexible, high-performance serving system for machine learning models
  • ONNX Runtime: Cross-platform, high performance ML inferencing and training accelerator
  • Apache MXNet: Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
  • KServe: Standardized Serverless ML Inference Platform on Kubernetes

Quick Overview

The Triton Inference Server is an open-source project that provides a flexible, scalable solution for deploying machine learning models in production environments. It supports multiple deep learning frameworks and optimizes model serving for various hardware platforms, enabling efficient inference across CPUs, GPUs, and other accelerators.

Pros

  • Supports multiple frameworks (TensorFlow, PyTorch, ONNX, etc.) and custom backends
  • Provides dynamic batching and model versioning for improved performance
  • Offers concurrent model execution and GPU sharing capabilities
  • Includes built-in metrics and monitoring features for easy integration with observability tools

Cons

  • Steep learning curve for complex deployments and custom configurations
  • Limited support for some specialized AI/ML frameworks
  • Requires careful tuning for optimal performance in large-scale deployments
  • Documentation can be overwhelming for beginners

Getting Started

To get started with Triton Inference Server, follow these steps:

  1. Install Docker on your system.
  2. Pull the Triton Docker image:
    docker pull nvcr.io/nvidia/tritonserver:22.12-py3
    
  3. Prepare your model repository with the required directory structure.
  4. Start the Triton server:
    docker run --gpus=all -it --shm-size=256m --rm -p8000:8000 -p8001:8001 -p8002:8002 -v /path/to/model/repository:/models nvcr.io/nvidia/tritonserver:22.12-py3 tritonserver --model-repository=/models
    
  5. Use the Triton client libraries or HTTP/gRPC endpoints to send inference requests to your deployed models (a minimal client sketch follows these steps).
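
For illustration, a minimal Python HTTP client request might look like the sketch below. The model name my_model, the FP32 input INPUT0 with shape [1, 16], and the output OUTPUT0 are placeholder assumptions; substitute the names, shapes, and data types declared in your model's configuration.

# Minimal sketch of a Triton HTTP client request (placeholder model and tensor names).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Prepare a dummy payload matching the assumed input shape
input_data = np.zeros((1, 16), dtype=np.float32)
inputs = [httpclient.InferInput("INPUT0", list(input_data.shape), "FP32")]
inputs[0].set_data_from_numpy(input_data)
outputs = [httpclient.InferRequestedOutput("OUTPUT0")]

result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT0"))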

For more detailed instructions and advanced configurations, refer to the official Triton Inference Server documentation.

Competitor Comparisons

NVIDIA TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Pros of TensorRT

  • Specialized for NVIDIA GPUs, offering highly optimized performance
  • Provides deep learning inference optimization and runtime
  • Supports a wide range of deep learning frameworks

Cons of TensorRT

  • Limited to NVIDIA hardware, less flexible for diverse deployments
  • Steeper learning curve compared to Triton's more user-friendly approach
  • Requires more manual optimization and tuning

Code Comparison

TensorRT example:

IBuilder* builder = createInferBuilder(gLogger);
INetworkDefinition* network = builder->createNetworkV2(1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
// ... (network definition)
IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 20);
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

Triton Inference Server example:

import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
inputs = [grpcclient.InferInput("INPUT0", [1, 16], "FP32")]
inputs[0].set_data_from_numpy(np.zeros((1, 16), dtype=np.float32))  # prepare inputs
result = client.infer(model_name="my_model", inputs=inputs)

The TensorRT code focuses on low-level engine creation and optimization, while Triton provides a higher-level client interface for inference requests.

PyTorch Serve

Serve, optimize and scale PyTorch models in production

Pros of PyTorch Serve

  • Tighter integration with PyTorch ecosystem
  • Simpler setup and deployment for PyTorch models
  • Built-in model versioning and A/B testing capabilities

Cons of PyTorch Serve

  • Limited support for non-PyTorch frameworks
  • Less optimized for high-performance, multi-framework deployments
  • Fewer advanced features compared to Triton (e.g., dynamic batching, model ensembles)

Code Comparison

PyTorch Serve:

import torch
from ts.torch_handler.base_handler import BaseHandler

class MyHandler(BaseHandler):
    def preprocess(self, data):
        return torch.tensor(data)

Triton Inference Server:

import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Process each request and append a pb_utils.InferenceResponse
            ...
        return responses

Both servers offer Python-based custom handlers, but Triton provides more flexibility for multi-framework support and complex inference pipelines.

TensorFlow Serving

A flexible, high-performance serving system for machine learning models

Pros of TensorFlow Serving

  • Native integration with TensorFlow models
  • Optimized for TensorFlow-specific operations
  • Supports model versioning and hot reloading

Cons of TensorFlow Serving

  • Limited support for non-TensorFlow models
  • Less flexible deployment options compared to Triton
  • Steeper learning curve for non-TensorFlow users

Code Comparison

TensorFlow Serving:

import tensorflow as tf
model = tf.saved_model.load("/path/to/model")
result = model(tf.constant([[1.0, 2.0, 3.0]]))

Triton Inference Server:

import numpy as np
import tritonclient.http as httpclient
client = httpclient.InferenceServerClient("localhost:8000")
inputs = [httpclient.InferInput("input", [1, 3], "FP32")]
inputs[0].set_data_from_numpy(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))
result = client.infer("model_name", inputs)

Both servers provide efficient model serving capabilities, but Triton offers broader model support and deployment flexibility, while TensorFlow Serving excels in TensorFlow-specific optimizations and versioning.

ONNX Runtime

Cross-platform, high performance ML inferencing and training accelerator

Pros of ONNX Runtime

  • Runs ONNX models exported from many frameworks, including TensorFlow and PyTorch
  • Extensive optimizations for various hardware platforms (CPU, GPU, IoT devices)
  • Easier integration into existing ML pipelines and applications

Cons of ONNX Runtime

  • Less focus on distributed inference and multi-model serving
  • May require more manual configuration for complex deployment scenarios
  • Limited built-in support for advanced serving features like model versioning

Code Comparison

ONNX Runtime:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
input_data = np.random.rand(1, 3).astype(np.float32)  # example input
output = session.run(None, {input_name: input_data})

Triton Inference Server:

import tritonclient.http as httpclient
client = httpclient.InferenceServerClient(url="localhost:8000")
inputs = [httpclient.InferInput("input", input_data.shape, "FP32")]
inputs[0].set_data_from_numpy(input_data)
result = client.infer("model_name", inputs)

Both repositories offer powerful inference capabilities, but ONNX Runtime is more focused on optimizing individual model performance across various hardware, while Triton Inference Server excels in managing complex, multi-model serving scenarios with advanced deployment features.

Apache MXNet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Pros of MXNet

  • More comprehensive deep learning framework with support for multiple programming languages
  • Offers a wider range of built-in neural network architectures and algorithms
  • Provides flexible symbolic and imperative programming paradigms

Cons of MXNet

  • Steeper learning curve due to its broader scope and feature set
  • Less focused on inference serving compared to Triton Inference Server
  • May require more setup and configuration for deployment in production environments

Code Comparison

MXNet example (Python):

import mxnet as mx
from mxnet import gluon

# Define and initialize a simple neural network
net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(10, activation='relu'))
net.add(gluon.nn.Dense(2))
net.initialize()

Triton Inference Server example (Python client):

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
inputs = [httpclient.InferInput("input", [1, 3, 224, 224], "FP32")]
inputs[0].set_data_from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))  # image batch
outputs = [httpclient.InferRequestedOutput("output")]
result = client.infer("model_name", inputs, outputs=outputs)

These code snippets highlight the different focus areas of the two projects: MXNet as a comprehensive deep learning framework and Triton Inference Server as a dedicated inference serving solution.

KServe

Standardized Serverless ML Inference Platform on Kubernetes

Pros of KServe

  • Broader ecosystem support: Integrates with Kubernetes, Knative, and various ML frameworks
  • More flexible model serving: Supports multi-model serving and custom runtimes
  • Built-in model management: Offers versioning, canary rollouts, and A/B testing

Cons of KServe

  • Higher complexity: Steeper learning curve due to more components and abstractions
  • Resource overhead: Requires more infrastructure resources for full deployment
  • Less optimized for specific hardware: May not fully leverage specialized accelerators

Code Comparison

KServe example:

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-iris"
spec:
  predictor:
    sklearn:
      storageUri: "gs://kfserving-samples/models/sklearn/iris"

Triton Inference Server example:

docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 \
    -v /path/to/model_repository:/models \
    nvcr.io/nvidia/tritonserver:21.09-py3 tritonserver \
    --model-repository=/models

Both repositories focus on model serving, but KServe offers a more comprehensive Kubernetes-native solution with advanced features, while Triton Inference Server provides a lightweight, high-performance option optimized for NVIDIA hardware.


README

Triton Inference Server

📣 vLLM x Triton Meetup at Fort Mason on Sept 9th 4:00 - 9:00 pm

We are excited to announce that we will be hosting our Triton user meetup with the vLLM team at Fort Mason on Sept 9th 4:00 - 9:00 pm. Join us for this exclusive event where you will learn about the newest vLLM and Triton features, get a glimpse into the roadmaps, and connect with fellow users and the NVIDIA Triton and vLLM teams. Seating is limited and registration confirmation is required to attend - please register here to join the meetup.



[!WARNING]

LATEST RELEASE

You are currently on the main branch, which tracks under-development progress towards the next release. The current release is version 2.49.0 and corresponds to the 24.08 container release on NVIDIA GPU Cloud (NGC).

Triton Inference Server is an open source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton Inference Server supports inference across cloud, data center, edge and embedded devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton Inference Server delivers optimized performance for many query types, including real time, batched, ensembles and audio/video streaming. Triton Inference Server is part of NVIDIA AI Enterprise, a software platform that accelerates the data science pipeline and streamlines the development and deployment of production AI.

Major features include:

  • Support for multiple deep learning and machine learning framework backends
  • Concurrent model execution and dynamic batching
  • Model ensembles and pipelines
  • HTTP/REST and gRPC inference protocols, plus C and Java APIs for in-process use
  • Metrics for GPU utilization, server throughput, and latency

New to Triton Inference Server? Make use of these tutorials to begin your Triton journey!

Join the Triton and TensorRT community and stay current on the latest product updates, bug fixes, content, best practices, and more. Need enterprise support? NVIDIA global support is available for Triton Inference Server with the NVIDIA AI Enterprise software suite.

Serve a Model in 3 Easy Steps

# Step 1: Create the example model repository
git clone -b r24.08 https://github.com/triton-inference-server/server.git
cd server/docs/examples
./fetch_models.sh

# Step 2: Launch triton from the NGC Triton container
docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:24.08-py3 tritonserver --model-repository=/models

# Step 3: Sending an Inference Request
# In a separate console, launch the image_client example from the NGC Triton SDK container
docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:24.08-py3-sdk
/workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

# Inference should return the following
Image '/workspace/images/mug.jpg':
    15.346230 (504) = COFFEE MUG
    13.224326 (968) = CUP
    10.422965 (505) = COFFEEPOT

Please read the QuickStart guide for additional information regarding this example. The QuickStart guide also contains an example of how to launch Triton on CPU-only systems. New to Triton and wondering where to get started? Watch the Getting Started video.

Examples and Tutorials

Check out NVIDIA LaunchPad for free access to a set of hands-on labs with Triton Inference Server hosted on NVIDIA infrastructure.

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM are located in the NVIDIA Deep Learning Examples page on GitHub. The NVIDIA Developer Zone contains additional documentation, presentations, and examples.

Documentation

Build and Deploy

The recommended way to build and use Triton Inference Server is with Docker images.

Using Triton

Preparing Models for Triton Inference Server

The first step in using Triton to serve your models is to place one or more models into a model repository. Depending on the model type and on which Triton capabilities you want to enable, you may also need to create a model configuration for the model.
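
As an illustration only, the Python sketch below lays out a single-model repository for a hypothetical ONNX model named my_model; the tensor names, data types, dimensions, and the source path of the ONNX file are assumptions to be replaced with your own.

# Sketch: build a minimal model repository for a hypothetical ONNX model.
# Resulting layout:
#   model_repository/
#     my_model/
#       config.pbtxt
#       1/
#         model.onnx
from pathlib import Path
import shutil

repo = Path("model_repository")
version_dir = repo / "my_model" / "1"
version_dir.mkdir(parents=True, exist_ok=True)

# Copy an existing ONNX file into the version directory (placeholder source path).
shutil.copy("path/to/model.onnx", version_dir / "model.onnx")

# Minimal model configuration; tensor names, types, and dims are assumptions.
config = """
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 16 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 16 ]
  }
]
dynamic_batching { }
"""
(repo / "my_model" / "config.pbtxt").write_text(config)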

Configure and Use Triton Inference Server

Client Support and Examples

A Triton client application sends inference and other requests to Triton. The Python and C++ client libraries provide APIs to simplify this communication.
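
As a rough sketch of that API (not a complete client), the Python HTTP library can be used to check server and model status and inspect model metadata before sending requests; the endpoint localhost:8000 and the model name densenet_onnx below follow the quickstart example above.

# Sketch: liveness/readiness and metadata queries with the Python HTTP client.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print("Server live: ", client.is_server_live())
print("Server ready:", client.is_server_ready())
print("Model ready: ", client.is_model_ready("densenet_onnx"))

# Inspect the model's declared inputs and outputs
print(client.get_model_metadata("densenet_onnx"))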

Extend Triton

Triton Inference Server's architecture is specifically designed for modularity and flexibility.
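
One common extension point is the Python backend, in which a model is implemented as a TritonPythonModel class placed in the model repository as model.py. The sketch below shows the general lifecycle of such a backend; the tensor names INPUT0/OUTPUT0 and the echo logic are placeholders, not a prescribed implementation.

# Sketch of a Python-backend model file (model.py) showing the
# initialize/execute/finalize lifecycle. Tensor names are placeholders.
import json

import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args["model_config"] holds the model's configuration as a JSON string
        self.model_config = json.loads(args["model_config"])

    def execute(self, requests):
        responses = []
        for request in requests:
            input0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Placeholder "inference": echo the input back as the output
            output0 = pb_utils.Tensor("OUTPUT0", input0.as_numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[output0]))
        return responses

    def finalize(self):
        # Called once when the model is unloaded; release resources here
        pass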

Additional Documentation

Contributing

Contributions to Triton Inference Server are more than welcome. To contribute please review the contribution guidelines. If you have a backend, client, example or similar contribution that is not modifying the core of Triton, then you should file a PR in the contrib repo.

Reporting problems, asking questions

We appreciate any feedback, questions or bug reporting regarding this project. When posting issues in GitHub, follow the process outlined in the Stack Overflow document. Ensure posted examples are:

  • minimal – use as little code as possible that still produces the same problem
  • complete – provide all parts needed to reproduce the problem. Check if you can strip external dependencies and still show the problem. The less time we spend on reproducing problems, the more time we have to fix them
  • verifiable – test the code you're about to provide to make sure it reproduces the problem. Remove all other problems that are not related to your request/question.

For issues, please use the provided bug report and feature request templates.

For questions, we recommend posting in our community GitHub Discussions.

For more information

Please refer to the NVIDIA Developer Triton page for more information.