Top Related Projects
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
- TensorFlow: An Open Source Machine Learning Framework for Everyone
- PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
- TVM: Open deep learning compiler stack for CPU, GPU and specialized accelerators
- coremltools: Core ML tools contain supporting tools for Core ML model conversion, editing, and validation
- JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Quick Overview
ONNX (Open Neural Network Exchange) is an open format to represent machine learning models. It defines a common set of operators and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.
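Both halves of that definition are visible from Python: a saved .onnx file is a protobuf whose graph lists nodes drawn from the common operator set, and whose opset imports record the operator-set version it targets. A minimal inspection sketch (the file name is a placeholder):
import onnx
model = onnx.load("model.onnx")
print(model.opset_import[0].version)          # operator-set version the model targets
print([n.op_type for n in model.graph.node])  # operators used, drawn from the common set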
Pros
- Interoperability: Allows models to be easily transferred between different frameworks
- Ecosystem support: Widely adopted by major ML frameworks and hardware vendors
- Extensibility: Supports custom operators and metadata for specialized use cases (see the sketch after the Cons list)
- Performance optimization: Enables hardware-specific optimizations across different platforms
Cons
- Learning curve: Can be complex for beginners due to its comprehensive nature
- Version compatibility: Different versions of ONNX may have compatibility issues
- Limited support for some advanced model architectures: Certain cutting-edge architectures may not be fully supported
- Overhead: Converting models to ONNX format may introduce some performance overhead
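To make the extensibility point concrete: a custom operator is simply a node whose domain is not the default ONNX domain, and metadata rides along as key/value pairs on the model. A minimal sketch (the "com.example" domain, the op name, and the metadata values are made up for illustration):
import onnx
from onnx import helper
# A node from a custom operator domain (domain and op name are hypothetical)
node = helper.make_node("MyCustomOp", ["x"], ["y"], domain="com.example")
# Arbitrary metadata attached to an existing model
model = onnx.load("model.onnx")
entry = model.metadata_props.add()
entry.key, entry.value = "trained_on", "imagenet-1k"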
Code Examples
- Converting a PyTorch model to ONNX:
import torch
import torchvision
# Load a pretrained ResNet-18 (torchvision's 'weights' API replaces the deprecated 'pretrained' flag)
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
model.eval()
# Create a dummy input
dummy_input = torch.randn(1, 3, 224, 224)
# Export the model to ONNX
torch.onnx.export(model, dummy_input, "resnet18.onnx", verbose=True)
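In practice it also helps to pin the opset version and name the graph inputs and outputs so downstream tools can reference them. A variant of the same export call (all keyword arguments below are part of the public torch.onnx.export API; the chosen opset and names are arbitrary):
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    opset_version=17,                      # pin the ONNX opset for reproducibility
    input_names=["input"],                 # name the graph input
    output_names=["logits"],               # name the graph output
    dynamic_axes={"input": {0: "batch"}},  # allow a variable batch dimension
)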
- Loading and running an ONNX model:
import onnxruntime
import numpy as np
# Load the ONNX model
session = onnxruntime.InferenceSession("resnet18.onnx")
# Prepare input data
input_name = session.get_inputs()[0].name
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)
# Run inference
output = session.run(None, {input_name: input_data})
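session.run returns one NumPy array per model output; for the ResNet-18 exported above, the single output holds class logits, so a plausible post-processing step (assuming that export) is:
# The only output is a (1, 1000) array of class logits
predicted_class = int(np.argmax(output[0], axis=1)[0])
print("Predicted class index:", predicted_class)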
- Checking ONNX model validity:
import onnx
# Load the ONNX model
model = onnx.load("resnet18.onnx")
# Check the model's validity
onnx.checker.check_model(model)
print("The model is valid!")
Getting Started
To get started with ONNX:
- Install ONNX:
pip install onnx
- Install ONNX Runtime for inference:
pip install onnxruntime
- Convert your model to ONNX format using the appropriate framework-specific exporter (e.g., torch.onnx.export for PyTorch).
- Use ONNX Runtime to run inference on your ONNX model, as shown in the code examples above.
For more detailed information and advanced usage, refer to the ONNX documentation.
Competitor Comparisons
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pros of ONNX Runtime
- Provides optimized inference engine for ONNX models
- Supports a wider range of hardware accelerators and platforms
- Offers better performance and scalability for production deployments
Cons of ONNX Runtime
- Larger codebase and more complex to contribute to
- Focuses on runtime execution rather than model definition and interchange
- May have a steeper learning curve for beginners
Code Comparison
ONNX (model definition):
import onnx
node = onnx.helper.make_node(
'Relu',
inputs=['x'],
outputs=['y'],
)
graph = onnx.helper.make_graph([node], ...)
model = onnx.helper.make_model(graph, ...)
ONNX Runtime (model inference):
import onnxruntime as ort
import numpy as np
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
# Placeholder input; real code would use the model's actual input shape
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = session.run([output_name], {input_name: input_data})
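The wider hardware support mentioned above is exposed through execution providers chosen at session creation. A minimal sketch (which providers are actually available depends on the onnxruntime package you installed):
import onnxruntime as ort
# Prefer CUDA and fall back to CPU; unavailable providers are skipped
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # the providers actually in use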
TensorFlow: An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Comprehensive ecosystem with tools for deployment, visualization, and debugging
- Strong support for distributed and parallel computing
- Extensive documentation and large community support
Cons of TensorFlow
- Steeper learning curve compared to ONNX
- Less flexibility in model interoperability across frameworks
- Larger file sizes for saved models
Code Comparison
ONNX model definition:
import onnx
from onnx import helper, TensorProto
# Declare the graph input and output so the model is self-contained
X = helper.make_tensor_value_info('x', TensorProto.FLOAT, [1])
Y = helper.make_tensor_value_info('y', TensorProto.FLOAT, [1])
node = helper.make_node('Relu', inputs=['x'], outputs=['y'])
graph = helper.make_graph([node], 'test', [X], [Y])
model = helper.make_model(graph)
TensorFlow model definition:
import tensorflow as tf
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
tf.keras.layers.Dense(10, activation='softmax')
])
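The interoperability gap can be bridged with the third-party tf2onnx package, which converts the Keras model above into an ONNX graph. A sketch (tf2onnx is a separate install, not part of TensorFlow or ONNX):
import tf2onnx
# Convert the Keras model defined above and write the ONNX file to disk
onnx_model, _ = tf2onnx.convert.from_keras(model, output_path="tf_model.onnx")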
ONNX focuses on providing a common format for representing machine learning models, allowing for easier interoperability between different frameworks. TensorFlow, on the other hand, offers a complete ecosystem for building and deploying machine learning models, with more advanced features for large-scale and production environments. While ONNX excels in model portability, TensorFlow provides a more comprehensive solution for end-to-end machine learning workflows.
PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- More comprehensive deep learning framework with built-in neural network modules
- Dynamic computational graph allowing for flexible model architectures
- Extensive community support and ecosystem of tools/libraries
Cons of PyTorch
- Steeper learning curve for beginners compared to ONNX
- Larger codebase and installation size
- Less focus on model interoperability across frameworks
Code Comparison
ONNX (defining a simple model):
import onnx
from onnx import helper, TensorProto
# Declare inputs and outputs so the graph is well-formed
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1])
Z = helper.make_tensor_value_info("Z", TensorProto.FLOAT, [1])
node = helper.make_node("Add", ["X", "Y"], ["Z"])
graph = helper.make_graph([node], "simple_graph", [X, Y], [Z])
model = helper.make_model(graph)
onnx.save(model, "simple_model.onnx")
PyTorch (defining a simple model):
import torch
import torch.nn as nn
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 1)
    def forward(self, x):
        return self.linear(x)
model = SimpleModel()
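Given the interoperability focus of this comparison, the natural bridge is to export the PyTorch model above to ONNX with the built-in exporter; a short sketch (input/output names are arbitrary):
# Trace SimpleModel with a dummy batch and save it as an ONNX graph
dummy = torch.randn(1, 10)
torch.onnx.export(model, dummy, "simple_model.onnx", input_names=["x"], output_names=["y"])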
TVM: Open deep learning compiler stack for CPU, GPU and specialized accelerators
Pros of TVM
- Provides end-to-end optimization and deployment for deep learning models
- Supports a wide range of hardware targets, including CPUs, GPUs, and specialized accelerators
- Offers automatic tuning and optimization for specific hardware architectures
Cons of TVM
- Steeper learning curve compared to ONNX due to its more complex architecture
- Requires more setup and configuration for specific deployment scenarios
- May have longer compilation times for complex models
Code Comparison
ONNX example (defining a simple model):
import onnx
from onnx import helper, TensorProto
# Declare all node inputs (including weights and bias) so the graph is well-formed
X = helper.make_tensor_value_info('X', TensorProto.FLOAT, [1, 3, 224, 224])
W = helper.make_tensor_value_info('W', TensorProto.FLOAT, [64, 3, 3, 3])
B = helper.make_tensor_value_info('B', TensorProto.FLOAT, [64])
Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, [1, 64, 222, 222])
node_def = helper.make_node('Conv', ['X', 'W', 'B'], ['Y'], kernel_shape=[3, 3])
graph_def = helper.make_graph([node_def], 'test-model', [X, W, B], [Y])
model_def = helper.make_model(graph_def, producer_name='onnx-example')
onnx.save(model_def, 'conv_model.onnx')
TVM example (compiling and optimizing a model):
import onnx
import tvm
from tvm import relay
# Load a model to compile; the file name here is a placeholder
onnx_model = onnx.load("model.onnx")
shape_dict = {'input': (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
target = tvm.target.cuda()
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target, params=params)
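Running the compiled library then goes through TVM's graph executor. A minimal sketch using the standard tvm.contrib.graph_executor API (the input name must match the graph input declared in shape_dict):
import numpy as np
from tvm.contrib import graph_executor
dev = tvm.cuda(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()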
coremltools: Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
Pros of coremltools
- Specifically designed for Apple platforms, offering seamless integration with iOS, macOS, and other Apple devices
- Provides tools for converting models from various frameworks (TensorFlow, Keras, scikit-learn) to Core ML format
- Includes features for model optimization and quantization tailored for Apple hardware
Cons of coremltools
- Limited to Apple ecosystem, lacking cross-platform support unlike ONNX
- Smaller community and ecosystem compared to ONNX, potentially resulting in fewer resources and third-party tools
Code Comparison
coremltools:
import coremltools as ct
# Convert a Keras .h5 model; coremltools accepts 'tensorflow' (not 'keras') as the source
model = ct.convert('model.h5', source='tensorflow')
model.save('model.mlmodel')
ONNX (using the keras2onnx converter, since archived in favor of tf2onnx):
import onnx
from tensorflow import keras
from keras2onnx import convert_keras
# Load the Keras model to convert; the file name is a placeholder
keras_model = keras.models.load_model('model.h5')
onnx_model = convert_keras(keras_model)
onnx.save_model(onnx_model, 'model.onnx')
Both examples show model conversion, but coremltools focuses on Apple's Core ML format, while ONNX provides a more universal approach for cross-platform compatibility.
JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Pros of JAX
- Designed for high-performance numerical computing and machine learning
- Supports automatic differentiation and GPU/TPU acceleration
- Offers a more flexible and customizable approach to building ML models
Cons of JAX
- Steeper learning curve compared to ONNX
- Less widespread adoption in production environments
- Limited built-in support for traditional deep learning architectures
Code Comparison
ONNX example:
import onnx
model = onnx.load("model.onnx")
onnx.checker.check_model(model)
JAX example:
import jax.numpy as jnp
from jax import grad, jit
def f(x):
    return jnp.sum(jnp.sin(x))
grad_f = jit(grad(f))
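Calling the jitted gradient shows the composability in action; since f sums sin(x), its gradient is the elementwise cosine:
x = jnp.arange(3.0)
print(grad_f(x))  # equals jnp.cos(x)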
ONNX focuses on providing a standardized format for representing machine learning models, while JAX offers a more flexible framework for numerical computing and gradient-based optimization. ONNX is better suited for model interoperability and deployment, whereas JAX excels in research and custom algorithm development.
README
Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).
ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.
Use ONNX
Learn about the ONNX spec
- Overview
- ONNX intermediate representation spec
- Versioning principles of the spec
- Operators documentation
- Operators documentation (latest release)
- Python API Overview
Programming utilities for working with ONNX Graphs
Contribute
ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.
Check out our contribution guide to get started.
If you think an operator should be added to the ONNX specification, please read this document.
Community meetings
The schedules of the regular meetings of the Steering Committee, the working groups, and the SIGs can be found here.
Community Meetups are held at least once a year. Content from previous community meetups is available at:
- 2020.04.09 https://wiki.lfaidata.foundation/display/DL/LF+AI+Day+-ONNX+Community+Virtual+Meetup+-+Silicon+Valley+-+April+9
- 2020.10.14 https://wiki.lfaidata.foundation/display/DL/LF+AI+Day+-+ONNX+Community+Workshop+-+October+14
- 2021.03.24 https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=35160391
- 2021.10.21 https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=46989689
- 2022.06.24 https://wiki.lfaidata.foundation/display/DL/ONNX+Community+Day+-+June+24
- 2023.06.28 https://wiki.lfaidata.foundation/display/DL/ONNX+Community+Day+2023+-+June+28
Discuss
We encourage you to open Issues or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.
Follow Us
Stay up to date with the latest ONNX news. [Facebook] [Twitter]
Roadmap
A roadmap process takes place every year. More details can be found here.
Installation
Released ONNX packages are published on PyPI.
pip install onnx # or pip install onnx[reference] for optional reference implementation dependencies
ONNX weekly packages are also published on PyPI to enable experimentation and early testing.
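The weekly builds live under a separate package name (onnx-weekly on PyPI):
pip install onnx-weekly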
Detailed install instructions, including Common Build Options and Common Errors can be found here
Testing
ONNX uses pytest as its test driver. To run the tests, first install pytest (and nbval for the notebook tests):
pip install pytest nbval
After installing pytest, run the tests with:
pytest
Development
Check out the contributor guide for instructions.
License
Apache License v2.0
Code of Conduct
ONNX Open Source Code of Conduct