QNNPACK
Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators
Top Related Projects
High-efficiency floating-point neural network inference operators for mobile, server, and Web
An Open Source Machine Learning Framework for Everyone
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
ncnn is a high-performance neural network inference framework optimized for the mobile platform
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform support, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, while drawing on the extensibility and high performance of existing open source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome to help make TNN a better framework.
Quick Overview
QNNPACK (Quantized Neural Network PACKage) is a mobile-optimized library for low-precision neural network inference. It is designed to accelerate the execution of quantized neural networks on ARM mobile devices. QNNPACK is part of the PyTorch ecosystem and focuses on efficient implementation of quantized operators.
Pros
- Optimized for mobile devices, particularly ARM-based processors
- Supports various quantization schemes, including 8-bit quantization
- Integrates well with PyTorch's ecosystem
- Provides significant performance improvements for quantized neural networks
Cons
- Limited to specific hardware architectures (primarily ARM)
- Requires expertise in quantization techniques for optimal use
- May not be suitable for all types of neural networks or applications
- Documentation could be more comprehensive for newcomers
Code Examples
- Creating a quantized convolution operator:
qnnp_operator_t convolution;
qnnp_status status = qnnp_create_convolution2d_nhwc_q8(
padding_top, padding_right, padding_bottom, padding_left,
kernel_height, kernel_width,
stride_height, stride_width,
dilation_height, dilation_width,
groups, group_input_channels, group_output_channels,
input_zero_point, input_scale,
kernel_zero_point, kernel_scale,
kernel, bias,
output_zero_point, output_scale,
output_min, output_max,
0, &convolution);
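- Setting up the operator for a specific batch of input and output tensors (a minimal sketch; batch_size, the tensor pointers, and the pixel strides are placeholders, and the trailing threadpool argument follows the qnnpack.h declaration, which may differ slightly between revisions):
qnnp_status status = qnnp_setup_convolution2d_nhwc_q8(
    convolution,
    batch_size,
    input_height, input_width,
    input, input_pixel_stride,
    output, output_pixel_stride,
    threadpool);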
- Running the quantized convolution (the second argument is a pthreadpool handle; pass NULL for single-threaded execution):
qnnp_status status = qnnp_run_operator(convolution, threadpool);
- Initializing QNNPACK:
qnnp_status status = qnnp_initialize();
if (status != qnnp_status_success) {
// Handle initialization error
}
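- Cleaning up (a brief sketch of the overall lifecycle; qnnp_delete_operator and qnnp_deinitialize mirror the create and initialize calls above):
// Typical lifecycle: qnnp_initialize -> create -> setup -> run (possibly many times) -> delete
qnnp_delete_operator(convolution);  // releases the operator, including its repacked weights
qnnp_deinitialize();                // optional: releases global QNNPACK state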
Getting Started
To use QNNPACK in your project:
- Clone the repository:
  git clone https://github.com/pytorch/QNNPACK.git
- Build QNNPACK:
  cd QNNPACK
  mkdir build && cd build
  cmake ..
  make
- Include QNNPACK in your C++ project:
  #include <qnnpack.h>

  // Initialize QNNPACK
  qnnp_initialize();

  // Use QNNPACK operators in your code
Note: QNNPACK is typically used as part of the PyTorch ecosystem. For most users, it's recommended to use PyTorch's quantization features, which internally use QNNPACK on supported devices.
Competitor Comparisons
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Pros of XNNPACK
- Broader platform support, including x86, ARM, and WebAssembly
- More extensive operator coverage, including floating-point (FP32/FP16) operators in addition to quantized ones
- Active development and frequent updates
Cons of XNNPACK
- Potentially more complex integration due to its broader scope
- May have a larger footprint for simple use cases
Code Comparison
QNNPACK (C):
qnnp_status qnnp_initialize(void);
qnnp_status qnnp_deinitialize(void);
XNNPACK (C):
enum xnn_status xnn_initialize(const struct xnn_allocator* allocator);
enum xnn_status xnn_deinitialize(void);
Key Differences
- QNNPACK focuses on quantized neural network operations, while XNNPACK covers a wider range of neural network primitives.
- XNNPACK offers more flexibility in terms of supported architectures and platforms.
- QNNPACK is more tightly integrated with PyTorch, while XNNPACK is designed as a standalone library.
- XNNPACK provides more granular control over memory allocation through its initialization function.
Both libraries aim to optimize neural network operations on mobile and embedded devices, but XNNPACK has a broader scope and more active development. QNNPACK may be easier to use for PyTorch-specific projects, while XNNPACK offers more versatility for cross-platform development.
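As a small illustration of that last point, a sketch of each library's initialization call (passing NULL to xnn_initialize selects XNNPACK's default allocator; QNNPACK exposes no equivalent hook):
// QNNPACK: no configuration at initialization time
qnnp_status qstatus = qnnp_initialize();

// XNNPACK: optionally inject a custom allocator; NULL selects the default one
enum xnn_status xstatus = xnn_initialize(NULL);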
An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Larger ecosystem with more tools, libraries, and community support
- Better support for production deployment and serving models
- More comprehensive documentation and learning resources
Cons of TensorFlow
- Steeper learning curve and more complex API
- Slower development cycle and less flexibility for research
Code Comparison
QNNPACK:
qnnp_operator_t convolution_op = nullptr;
qnnp_create_convolution2d_nhwc_q8(
    padding_top, padding_right, padding_bottom, padding_left,
    kernel_height, kernel_width,
    stride_height, stride_width,
    dilation_height, dilation_width,
    groups, group_input_channels, group_output_channels,
    input_zero_point, input_scale,
    kernel_zero_point, kernel_scale,
    kernel, bias,
    output_zero_point, output_scale,
    output_min, output_max,
    0, &convolution_op);
TensorFlow:
conv = tf.keras.layers.Conv2D(
filters=32,
kernel_size=(3, 3),
strides=(1, 1),
padding='valid',
activation='relu',
input_shape=(28, 28, 1)
)
QNNPACK is a low-level quantized neural network library, while TensorFlow provides a higher-level API for building and training neural networks. QNNPACK focuses on efficient inference on mobile and embedded devices, whereas TensorFlow offers a more comprehensive framework for both training and inference across various platforms.
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
Pros of MNN
- Broader platform support, including mobile and IoT devices
- More comprehensive feature set, including model conversion and optimization tools
- Active development and frequent updates
Cons of MNN
- Steeper learning curve due to more complex architecture
- Less integration with PyTorch ecosystem
- Potentially slower performance for specific use cases
Code Comparison
MNN example:
auto input = _Input({1, 3, 224, 224}, NC4HW4);
auto conv = _Conv(0.0f, 0.0f, input, {32, 3, 3, 3}, VALID);
auto output = _Convert(conv, NCHW);
QNNPACK example:
qnnp_operator_t convolution = nullptr;
qnnp_create_convolution2d_nhwc_q8(
padding_top, padding_right, padding_bottom, padding_left,
kernel_height, kernel_width,
stride_height, stride_width,
dilation_height, dilation_width,
groups, group_input_channels, group_output_channels,
input_zero_point, input_scale,
kernel_zero_point, kernel_scale,
kernel, bias,
output_zero_point, output_scale,
output_min, output_max,
0, &convolution);
Both libraries offer efficient implementations for neural network operations, but MNN provides a higher-level API and more extensive tooling, while QNNPACK focuses on low-level optimizations for quantized operations, particularly within the PyTorch ecosystem.
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Pros of ncnn
- Broader platform support, including mobile and embedded devices
- More comprehensive set of pre-trained models and operators
- Easier to use for deployment in production environments
Cons of ncnn
- Less integration with PyTorch ecosystem
- May have slightly lower performance for some specific operations
- Documentation can be less detailed compared to QNNPACK
Code Comparison
ncnn example:
ncnn::Net net;
net.load_param("model.param");
net.load_model("model.bin");
ncnn::Mat in(224, 224, 3);
ncnn::Mat out;
ncnn::Extractor ex = net.create_extractor();
ex.input("data", in);
ex.extract("output", out);
QNNPACK example (used indirectly through PyTorch's C++ API, which dispatches quantized operators to QNNPACK on supported devices):
torch::jit::script::Module module = torch::jit::load("model.pt");
module.eval();
torch::Tensor input = torch::rand({1, 3, 224, 224});
torch::Tensor output = module.forward({input}).toTensor();
Both libraries aim to optimize neural network computations, but ncnn focuses more on deployment across various platforms, while QNNPACK is specifically designed for quantized operations within the PyTorch framework.
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
Pros of MACE
- Supports a wider range of mobile platforms, including Android, iOS, and Linux
- Provides a complete end-to-end solution for deploying deep learning models on mobile devices
- Offers built-in model conversion tools for various frameworks like TensorFlow and Caffe
Cons of MACE
- Less focused on quantization compared to QNNPACK
- May have a steeper learning curve due to its broader scope and features
- Potentially larger overhead for simple use cases
Code Comparison
MACE example (model deployment):
#include "mace/public/mace.h"
MaceEngine engine;
MaceStatus status = CreateMaceEngineFromProto(model_graph_proto,
model_graph_proto_size,
model_weights_data,
model_weights_data_size,
input_names,
output_names,
device_type,
&engine);
QNNPACK example (operator usage):
#include <qnnpack.h>
qnnp_initialize();
qnnp_operator_t convolution;
qnnp_create_convolution2d_nhwc_q8(
padding_top, padding_right, padding_bottom, padding_left,
kernel_height, kernel_width,
stride_height, stride_width,
dilation_height, dilation_width,
groups, group_input_channels, group_output_channels,
input_zero_point, input_scale,
kernel_zero_point, kernel_scale,
kernel, bias,
output_zero_point, output_scale,
output_min, output_max,
0, &convolution);
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform support, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, while drawing on the extensibility and high performance of existing open source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome to help make TNN a better framework.
Pros of TNN
- Broader platform support, including mobile and embedded devices
- More comprehensive model optimization techniques
- Extensive support for various neural network architectures
Cons of TNN
- Less integration with PyTorch ecosystem
- Potentially steeper learning curve for developers familiar with PyTorch
Code Comparison
QNNPACK example (C++):
qnnp_operator_t convolution;
qnnp_create_convolution2d_nhwc_q8(
padding_top, padding_right, padding_bottom, padding_left,
kernel_height, kernel_width,
stride_height, stride_width,
dilation_height, dilation_width,
groups, group_input_channels, group_output_channels,
input_zero_point, input_scale,
kernel_zero_point, kernel_scale,
kernel, bias,
output_zero_point, output_scale,
output_min, output_max,
flags,
&convolution);
TNN example (C++):
auto conv_layer = std::make_shared<ConvLayerAcc>();
conv_layer->Init(context_, param_, resource_, inputs, outputs);
conv_layer->Forward(inputs, outputs);
The code examples show that QNNPACK focuses on quantized operations with detailed parameter configuration, while TNN provides a higher-level abstraction for layer implementation and execution.
README
QNNPACK
QNNPACK (Quantized Neural Networks PACKage) is a mobile-optimized library for low-precision high-performance neural network inference. QNNPACK provides implementation of common neural network operators on quantized 8-bit tensors.
QNNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives for high-level deep learning frameworks. As of today, QNNPACK is integrated in PyTorch 1.0 with Caffe2 graph representation.
Operator Coverage
Operators that are currently implemented, or planned for implementation, are listed below:
- 2D Convolution
- 2D Deconvolution
- Channel Shuffle
- Fully Connected
- Locally Connected
- 2D Max Pooling
- 2D Average Pooling
- Global Average Pooling
- Sigmoid
- Leaky ReLU
- Clamp (can be used for ReLU or ReLU6 when it is not fused into another operator; see the sketch after this list)
- SoftArgMax (aka SoftMax)
- Group Normalization
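Because the convolution and fully connected operators accept output_min/output_max clamp bounds, a fused ReLU or ReLU6 reduces to computing those bounds in quantized units; the same bounds can also be given to the standalone Clamp operator. A minimal sketch, assuming hypothetical output quantization parameters:
#include <math.h>
#include <stdint.h>

/* Compute the 8-bit clamp bounds corresponding to a ReLU6 on the output.
   output_scale and output_zero_point are the (hypothetical) quantization
   parameters of the operator's output tensor. */
static void relu6_clamp_bounds(float output_scale, uint8_t output_zero_point,
                               uint8_t* output_min, uint8_t* output_max) {
  *output_min = output_zero_point;                          /* real value 0.0 */
  float upper = output_zero_point + 6.0f / output_scale;    /* real value 6.0 */
  *output_max = upper > 255.0f ? 255 : (uint8_t) lrintf(upper);
}
The resulting values are what calls such as qnnp_create_convolution2d_nhwc_q8 take as their output_min and output_max arguments.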
Building
QNNPACK provides standard CMake-based build scripts.
Native compilation
It is recommended to use the scripts/build-local.sh script to build QNNPACK for the host machine.
Cross-compilation for Android
To cross-compile for Android, set $ANDROID_NDK
environment variable (where $ANDROID_NDK
is the path to Android NDK directory, e.g. /opt/android-ndk-r15c
) and use one of the scripts from the table below:
| ABI | Build script | Restrictions |
|---|---|---|
| armeabi-v7a | scripts/build-android-armv7.sh | Requires CPU with ARM NEON |
| arm64-v8a | scripts/build-android-arm64.sh | |
| x86 | scripts/build-android-x86.sh | |
Notes:
- On armeabi-v7a, qnnp_initialize will fail with qnnp_status_unsupported_hardware if the mobile CPU does not support ARM NEON. Don't set -DANDROID_ARM_NEON=1 for QNNPACK compilation, as it can make qnnp_initialize crash on CPUs without ARM NEON; a minimal handling sketch follows below.
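A minimal sketch of checking for this failure at runtime (the fallback path itself is application-specific and only hinted at here):
#include <qnnpack.h>
#include <stdio.h>

int init_qnnpack_or_fallback(void) {
  enum qnnp_status status = qnnp_initialize();
  if (status == qnnp_status_unsupported_hardware) {
    /* CPU lacks ARM NEON: skip QNNPACK and use a non-QNNPACK code path. */
    fprintf(stderr, "QNNPACK unavailable on this CPU; using fallback\n");
    return 0;
  }
  return status == qnnp_status_success;
}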
Cross-compilation for iOS
To cross-compile for iOS, clone ios-cmake, and set $IOS_CMAKE_TOOLCHAIN_FILE
environment variable (where $IOS_CMAKE_TOOLCHAIN_FILE
is the path to ios.toolchain.cmake
file in ios-cmake), and use one of the scripts from the table below:
| Architecture | Build script | Notes |
|---|---|---|
| armv7 | scripts/build-ios-armv7.sh | iPhone 3GS/4/4S |
| armv7s | scripts/build-ios-armv7s.sh | iPhone 5 and newer |
| arm64 | scripts/build-ios-arm64.sh | iPhone 5S and newer |
| arm64e | scripts/build-ios-arm64e.sh | iPhone XS/XR |
| i386 | scripts/build-ios-i386.sh | iPhone Simulator (32-bit) |
| x86_64 | scripts/build-ios-x86_64.sh | iPhone Simulator (64-bit) |
End-to-End Benchmarking
Caffe2 backend of PyTorch 1.0 natively integrates QNNPACK, and provides a pre-trained quantized MobileNet v2 model. Below are instructions for benchmarking this model end-to-end with QNNPACK.
Raspberry Pi 2 or 3
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK
# Build Caffe2 (including binaries) for the host system
# Use only 1 thread for build to avoid out-of-memory failures
MAX_JOBS=1 scripts/build_local.sh -DBUILD_BINARY=ON -DBUILD_PYTHON=OFF \
-DUSE_OBSERVERS=OFF -DUSE_DISTRIBUTED=OFF
# Download model weights
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb
# Download model graph
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb
# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
build/bin/speed_benchmark --net predict_net.pb --init_net init_net.pb \
--input data --input_dims 1,3,224,224 --input_type float \
--warmup 50 --iter 10
ARMv7 (32-bit) Android
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK
# Build Caffe2 (including binaries) for Android, and push to device
scripts/build_android.sh -DANDROID_TOOLCHAIN=clang -DBUILD_BINARY=ON
adb push build_android/bin/speed_benchmark /data/local/tmp/speed_benchmark
# Download model weights and copy them to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb
adb push init_net.pb /data/local/tmp/init_net.pb
# Download model graph and copy it to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb
adb push predict_net.pb /data/local/tmp/predict_net.pb
# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
adb shell /data/local/tmp/speed_benchmark \
--net /data/local/tmp/predict_net.pb \
--init_net /data/local/tmp/init_net.pb \
--input data --input_dims 1,3,224,224 --input_type float \
--warmup 50 --iter 10
ARM64 (64-bit) Android
# Clone PyTorch 1.0 repo
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK
# Build Caffe2 (including binaries) for Android, and push to device
scripts/build_android.sh -DANDROID_ABI=arm64-v8a -DANDROID_TOOLCHAIN=clang -DBUILD_BINARY=ON
adb push build_android/bin/speed_benchmark /data/local/tmp/speed_benchmark
# Download model weights and copy them to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/init_net.pb
adb push init_net.pb /data/local/tmp/init_net.pb
# Download model graph and copy it to Android device
wget https://s3.amazonaws.com/download.caffe2.ai/models/mobilenet_v2_1.0_224_quant/predict_net.pb
adb push predict_net.pb /data/local/tmp/predict_net.pb
# Run speed benchmark with 50 warm-up iterations and 10 measurement iterations
adb shell /data/local/tmp/speed_benchmark \
--net /data/local/tmp/predict_net.pb \
--init_net /data/local/tmp/init_net.pb \
--input data --input_dims 1,3,224,224 --input_type float \
--warmup 50 --iter 10
PEP (Performance Evaluation Platform) Method
Facebook AI Performance Evaluation Platform is a framework and backend agnostic benchmarking platform to compare machine learning inferencing runtime metrics on a set of models and a variety of backends.
We use PEP to produce the results we present in our blog post.
With an ARMv7 device connected:
# Clone PyTorch 1.0 repo
mkdir ~/Code && cd ~/Code
git clone --recursive https://github.com/pytorch/pytorch.git
cd pytorch
# Optional: update QNNPACK submodule to latest revision
git submodule update --remote third_party/QNNPACK
# Clone PEP repo
cd ~/Code
git clone --recursive https://github.com/facebook/FAI-PEP.git aibench
cd aibench
# Run PEP benchmark with cool specifications. Try changing that cmd with more specifications!
# First time compile could take 20+ minutes
./benchmarking/run_bench.py \
--platform android \
-b ~/Code/aibench/specifications/models/caffe2/mobilenet_v2/mobilenet_v2_quant.json \
--platform android --repo_dir ~/Code/pytorch \
--frameworks_dir ~/Code/aibench/specifications/frameworks --framework caffe2
Acknowledgements
QNNPACK is developed by Marat Dukhan, Yiming Wu, Hao Lu, and Bert Maher. We thank Andrew Tulloch and Yangqing Jia for advice during the development of QNNPACK.
License
QNNPACK is BSD licensed, as found in the LICENSE file.