onnx

Open standard for machine learning interoperability


Top Related Projects

  • ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
  • TensorFlow: An Open Source Machine Learning Framework for Everyone
  • PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • TVM: Open deep learning compiler stack for cpu, gpu and specialized accelerators
  • coremltools: supporting tools for Core ML model conversion, editing, and validation
  • JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

Quick Overview

ONNX (Open Neural Network Exchange) is an open format to represent machine learning models. It defines a common set of operators and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.
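
For a concrete sense of that common format, the short sketch below (an illustrative example, assuming a model file such as the resnet18.onnx produced in the code examples further down) loads a serialized model and lists the standard operators its graph uses:

import onnx

# Load a serialized ONNX model ("resnet18.onnx" is a placeholder file name)
model = onnx.load("resnet18.onnx")

# Each graph node references an operator from the shared ONNX operator set,
# which is what lets different frameworks and runtimes interpret the same file
for node in model.graph.node:
    print(node.op_type, list(node.input), list(node.output))

# The opset imports record which operator-set versions the model targets
print(model.opset_import)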

Pros

  • Interoperability: Allows models to be easily transferred between different frameworks
  • Ecosystem support: Widely adopted by major ML frameworks and hardware vendors
  • Extensibility: Supports custom operators and metadata for specialized use cases
  • Performance optimization: Enables hardware-specific optimizations across different platforms

Cons

  • Learning curve: Can be complex for beginners due to its comprehensive nature
  • Version compatibility: Different versions of ONNX may have compatibility issues
  • Limited support for some advanced model architectures: Certain cutting-edge architectures may not be fully supported
  • Overhead: Converting models to ONNX format may introduce some performance overhead

Code Examples

  1. Converting a PyTorch model to ONNX:
import torch
import torchvision

# Load a pretrained ResNet model
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Create a dummy input
dummy_input = torch.randn(1, 3, 224, 224)

# Export the model to ONNX
torch.onnx.export(model, dummy_input, "resnet18.onnx", verbose=True)
  2. Loading and running an ONNX model:
import onnxruntime
import numpy as np

# Load the ONNX model
session = onnxruntime.InferenceSession("resnet18.onnx")

# Prepare input data
input_name = session.get_inputs()[0].name
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Run inference
output = session.run(None, {input_name: input_data})
  3. Checking ONNX model validity:
import onnx

# Load the ONNX model
model = onnx.load("resnet18.onnx")

# Check the model's validity
onnx.checker.check_model(model)
print("The model is valid!")

Getting Started

To get started with ONNX:

  1. Install ONNX:

    pip install onnx
    
  2. Install ONNX Runtime for inference:

    pip install onnxruntime
    
  3. Convert your model to ONNX format using the appropriate framework-specific exporter (e.g., torch.onnx.export for PyTorch); a scikit-learn sketch is shown just after this list.

  4. Use ONNX Runtime to run inference on your ONNX model, as shown in the code examples above.
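
As an example of step 3 for a non-PyTorch framework, here is a minimal sketch using the separate skl2onnx package (pip install skl2onnx) to export a scikit-learn classifier; the model and file names are illustrative:

import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import to_onnx

# Train a small scikit-learn model on toy data
X = np.random.randn(100, 4).astype(np.float32)
y = (X[:, 0] > 0).astype(np.int64)
clf = LogisticRegression().fit(X, y)

# Convert to ONNX; the sample input is used to infer input types and shapes
onnx_model = to_onnx(clf, X[:1])

with open("logistic_regression.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())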

For more detailed information and advanced usage, refer to the ONNX documentation.

Competitor Comparisons

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Pros of ONNX Runtime

  • Provides optimized inference engine for ONNX models
  • Supports a wider range of hardware accelerators and platforms
  • Offers better performance and scalability for production deployments

Cons of ONNX Runtime

  • Larger codebase and more complex to contribute to
  • Focuses on runtime execution rather than model definition and interchange
  • May have a steeper learning curve for beginners

Code Comparison

ONNX (model definition):

import onnx

node = onnx.helper.make_node(
    'Relu',
    inputs=['x'],
    outputs=['y'],
)

graph = onnx.helper.make_graph([node], ...)
model = onnx.helper.make_model(graph, ...)

ONNX Runtime (model inference):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Prepare an input array matching the model's expected shape (the shape here is illustrative)
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)
result = session.run([output_name], {input_name: input_data})

TensorFlow: An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Comprehensive ecosystem with tools for deployment, visualization, and debugging
  • Strong support for distributed and parallel computing
  • Extensive documentation and large community support

Cons of TensorFlow

  • Steeper learning curve compared to ONNX
  • Less flexibility in model interoperability across frameworks
  • Larger file sizes for saved models

Code Comparison

ONNX model definition:

import onnx
from onnx import TensorProto

# Declare the graph's input and output tensors
input_info = onnx.helper.make_tensor_value_info('x', TensorProto.FLOAT, [1])
output_info = onnx.helper.make_tensor_value_info('y', TensorProto.FLOAT, [1])

node = onnx.helper.make_node('Relu', inputs=['x'], outputs=['y'])
graph = onnx.helper.make_graph([node], 'test', [input_info], [output_info])
model = onnx.helper.make_model(graph)

TensorFlow model definition:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

ONNX focuses on providing a common format for representing machine learning models, allowing for easier interoperability between different frameworks. TensorFlow, on the other hand, offers a complete ecosystem for building and deploying machine learning models, with more advanced features for large-scale and production environments. While ONNX excels in model portability, TensorFlow provides a more comprehensive solution for end-to-end machine learning workflows.


PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • More comprehensive deep learning framework with built-in neural network modules
  • Dynamic computational graph allowing for flexible model architectures
  • Extensive community support and ecosystem of tools/libraries

Cons of PyTorch

  • Steeper learning curve for beginners compared to ONNX
  • Larger codebase and installation size
  • Less focus on model interoperability across frameworks

Code Comparison

ONNX (defining a simple model):

import onnx
from onnx import helper, TensorProto

# Declare the graph's inputs and output
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1])
Z = helper.make_tensor_value_info("Z", TensorProto.FLOAT, [1])

node = helper.make_node("Add", ["X", "Y"], ["Z"])
graph = helper.make_graph([node], "simple_graph", [X, Y], [Z])
model = helper.make_model(graph)
onnx.save(model, "simple_model.onnx")

PyTorch (defining a simple model):

import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)
    
    def forward(self, x):
        return self.linear(x)

model = SimpleModel()

TVM: Open deep learning compiler stack for cpu, gpu and specialized accelerators

Pros of TVM

  • Provides end-to-end optimization and deployment for deep learning models
  • Supports a wide range of hardware targets, including CPUs, GPUs, and specialized accelerators
  • Offers automatic tuning and optimization for specific hardware architectures

Cons of TVM

  • Steeper learning curve compared to ONNX due to its more complex architecture
  • Requires more setup and configuration for specific deployment scenarios
  • May have longer compilation times for complex models

Code Comparison

ONNX example (defining a simple model):

import onnx
from onnx import helper, TensorProto

X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [1, 3, 224, 224])
Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [1, 1000])
node_def = helper.make_node('Conv', ['X', 'W', 'B'], ['Y'], kernel_shape=[3, 3])
graph_def = helper.make_graph([node_def], 'test-model', [X], [Y])
model_def = helper.make_model(graph_def, producer_name='onnx-example')
onnx.save(model_def, 'conv_model.onnx')

TVM example (compiling and optimizing a model):

import onnx
import tvm
from tvm import relay

# Load a previously exported ONNX model
onnx_model = onnx.load('model.onnx')

# shape_dict keys must match the model's input names
shape_dict = {'input': (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
target = tvm.target.cuda()
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target, params=params)

coremltools: Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.

Pros of coremltools

  • Specifically designed for Apple platforms, offering seamless integration with iOS, macOS, and other Apple devices
  • Provides tools for converting models from various frameworks (TensorFlow, Keras, scikit-learn) to Core ML format
  • Includes features for model optimization and quantization tailored for Apple hardware

Cons of coremltools

  • Limited to Apple ecosystem, lacking cross-platform support unlike ONNX
  • Smaller community and ecosystem compared to ONNX, potentially resulting in fewer resources and third-party tools

Code Comparison

coremltools:

import coremltools as ct

# Convert a Keras HDF5 model; the unified converter detects the source framework automatically
model = ct.convert('model.h5')
model.save('model.mlmodel')

ONNX:

import onnx
from tensorflow import keras
from keras2onnx import convert_keras

# Load an existing Keras model and convert it to ONNX
keras_model = keras.models.load_model('model.h5')
onnx_model = convert_keras(keras_model)
onnx.save_model(onnx_model, 'model.onnx')

Both examples show model conversion, but coremltools focuses on Apple's Core ML format, while ONNX provides a more universal approach for cross-platform compatibility.


JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

Pros of JAX

  • Designed for high-performance numerical computing and machine learning
  • Supports automatic differentiation and GPU/TPU acceleration
  • Offers a more flexible and composable API for building complex models

Cons of JAX

  • Steeper learning curve compared to ONNX
  • Less widespread adoption in production environments
  • Limited support for deployment on edge devices

Code Comparison

ONNX example:

import onnx
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

JAX example:

import jax.numpy as jnp
from jax import grad, jit

def f(x):
    return jnp.sum(jnp.sin(x))

grad_f = jit(grad(f))

ONNX focuses on providing a standardized format for representing machine learning models, while JAX offers a more comprehensive framework for numerical computing and machine learning. ONNX is better suited for model interoperability and deployment across different frameworks and platforms, whereas JAX excels in research and development of advanced machine learning algorithms, particularly those requiring custom gradients or complex transformations.


README


Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).
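
As a small illustration (a sketch, not part of the original README text), the onnx Python package exposes these built-in operator definitions and the supported opset version programmatically:

import onnx
from onnx import defs

# The highest ai.onnx opset version supported by this onnx release
print("Default opset version:", defs.onnx_opset_version())

# Each schema describes one built-in operator: its domain, name, and the opset
# version in which it was introduced
for schema in defs.get_all_schemas()[:5]:
    print(schema.domain or "ai.onnx", schema.name, "since opset", schema.since_version)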

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs
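
As an example of such utilities (a minimal sketch; "model.onnx" is a placeholder path), the checker and shape inference can be run directly from Python:

import onnx
from onnx import shape_inference

# Load an existing model
model = onnx.load("model.onnx")

# Validate the model against the ONNX specification
onnx.checker.check_model(model)

# Propagate tensor shapes through the graph and save the annotated model
inferred_model = shape_inference.infer_shapes(model)
onnx.save(inferred_model, "model_with_shapes.onnx")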

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the working groups, and the SIGs can be found here.

Community Meetups are held at least once a year. Content from previous community meetups is available at:

Discuss

We encourage you to open Issues, or use Slack (If you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX released packages are published in PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are published in PyPI to enable experimentation and early testing.

vcpkg packages

onnx is in the maintenance list of vcpkg, so you can easily use vcpkg to build and install it.

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For powershell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing versions of ONNX (pip uninstall onnx).

A compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD version when building ONNX.

If you don't have Protobuf installed, ONNX will internally download and build Protobuf as part of the ONNX build.

Alternatively, you can manually install the Protobuf C/C++ libraries and tools at a specific version before proceeding. Then, depending on how you installed Protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run one of the following commands:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of Protobuf library you have: shared libraries are files ending in *.dll/*.so/*.dylib, while static libraries end in *.a/*.lib. The default is OFF, so you don't need to run the commands above if you prefer to use a static Protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build is successful, update PATH to include the Protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build is successful, update PATH to include the Protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
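
A slightly fuller check (a small sketch beyond the original instruction) also confirms the version and exercises the compiled extension module:

import onnx
from onnx import defs

# Print the installed ONNX version and the highest opset version it supports
print("onnx version:", onnx.__version__)
print("highest supported opset:", defs.onnx_opset_version())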

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, onnx links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, onnx is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how onnx links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON - onnx will dynamically link to protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF - onnx will link statically to protobuf, and Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries) and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON onnx uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, you first need to install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct