intel-extension-for-pytorch
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms
Top Related Projects
Tensors and Dynamic neural networks in Python with strong GPU acceleration
An Open Source Machine Learning Framework for Everyone
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Open deep learning compiler stack for cpu, gpu and specialized accelerators
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Quick Overview
Intel Extension for PyTorch is an open-source project that optimizes PyTorch for Intel hardware. It enhances performance on Intel CPUs and GPUs, providing seamless integration with existing PyTorch code. The extension includes optimized operators, graph optimizations, and hardware-specific features to accelerate deep learning workloads.
Pros
- Significant performance improvements for PyTorch on Intel hardware
- Easy integration with existing PyTorch code
- Supports both CPU and GPU optimizations
- Regular updates and active development from Intel
Cons
- Limited to Intel hardware, not beneficial for other architectures
- May require additional configuration or setup for optimal performance
- Some advanced features might have a learning curve
- Potential compatibility issues with certain PyTorch versions or custom operators
Code Examples
- Enabling Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex
# `model` is any torch.nn.Module you have already created
# Convert to channels-last memory format (recommended for CNN workloads on CPU)
model = model.to(memory_format=torch.channels_last)
# Apply IPEX operator and graph optimizations
model = ipex.optimize(model)
- Using Intel Extension for mixed precision training:
import torch
import intel_extension_for_pytorch as ipex
# Enable BF16 mixed precision (requires CPU support for AVX-512 BF16 or AMX)
# When an optimizer is passed, ipex.optimize returns both the model and the optimizer
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
- Utilizing Intel Extension for distributed training:
import torch
import torch.distributed as dist
import intel_extension_for_pytorch as ipex
import oneccl_bindings_for_pytorch  # registers the 'ccl' backend
# Initialize distributed training with oneCCL
dist.init_process_group(backend='ccl')
# ipex.optimize returns (model, optimizer) when an optimizer is passed
model, optimizer = ipex.optimize(model, optimizer=optimizer)
model = torch.nn.parallel.DistributedDataParallel(model)
Getting Started
To get started with Intel Extension for PyTorch, follow the two steps below; a combined end-to-end sketch appears after them:
- Install the extension:
pip install intel-extension-for-pytorch
- Import and use in your PyTorch code:
import torch
import intel_extension_for_pytorch as ipex
# Load your model
model = YourModel()
# Optimize the model
model = ipex.optimize(model)
# Continue with your regular PyTorch workflow
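Putting these steps together, here is a minimal end-to-end inference sketch; SimpleNet, the layer sizes, and the input shape are illustrative placeholders rather than anything provided by the library:
import torch
import intel_extension_for_pytorch as ipex
# Illustrative model definition; any torch.nn.Module works the same way
class SimpleNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(128, 64)
        self.fc2 = torch.nn.Linear(64, 10)
    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))
model = SimpleNet().eval()    # switch to inference mode before optimizing
model = ipex.optimize(model)  # apply IPEX operator/graph optimizations
# Run inference exactly as in regular PyTorch
with torch.no_grad():
    output = model(torch.randn(32, 128))
print(output.shape)  # torch.Size([32, 10])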
Competitor Comparisons
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- Broader ecosystem and community support
- More extensive documentation and tutorials
- Wider range of pre-trained models and datasets
Cons of PyTorch
- Less optimized for Intel hardware
- May require additional configuration for optimal performance on Intel CPUs
- Potentially slower inference and training on Intel architectures
Code Comparison
PyTorch:
import torch
x = torch.randn(5, 3)
y = torch.randn(3, 2)
z = torch.mm(x, y)
Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex  # importing registers Intel-optimized kernels
x = torch.randn(5, 3)
y = torch.randn(3, 2)
z = torch.mm(x, y)  # same PyTorch API; optimized kernels are used where available
The Intel Extension for PyTorch provides optimized implementations of PyTorch operations for Intel hardware. While the basic usage remains similar, the Intel extension offers performance improvements on Intel CPUs and GPUs. PyTorch provides a more general-purpose solution with broader compatibility, while the Intel extension focuses on optimizing performance for specific hardware.
An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Broader ecosystem and community support
- More extensive documentation and learning resources
- Better support for production deployment and serving models
Cons of TensorFlow
- Steeper learning curve for beginners
- Less intuitive API compared to PyTorch
- Slower development cycle and less flexibility in research settings
Code Comparison
TensorFlow:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex
model = torch.nn.Sequential(
    torch.nn.Linear(784, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10)
)
model = ipex.optimize(model)
The Intel Extension for PyTorch focuses on optimizing PyTorch performance on Intel hardware, while TensorFlow is a more general-purpose deep learning framework. TensorFlow offers a wider range of features and tools, but the Intel Extension for PyTorch provides specific optimizations for Intel CPUs and GPUs, potentially offering better performance on compatible hardware.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pros of ONNX Runtime
- Broader ecosystem support and compatibility with multiple frameworks
- More extensive optimization techniques for various hardware platforms
- Larger community and more frequent updates
Cons of ONNX Runtime
- Potentially more complex setup and integration process
- May require model conversion to ONNX format for optimal performance
Code Comparison
ONNX Runtime:
import onnxruntime as ort
session = ort.InferenceSession("model.onnx")
output = session.run(None, {"input": input_data})
Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex
model = model.to(memory_format=torch.channels_last)
model = ipex.optimize(model)
output = model(input_data)
The Intel Extension for PyTorch focuses on optimizing PyTorch models specifically for Intel hardware, while ONNX Runtime provides a more versatile runtime for various model formats and hardware platforms. The Intel extension integrates seamlessly with existing PyTorch code, whereas ONNX Runtime may require additional steps for model conversion and optimization.
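To make the conversion step concrete, the following is a minimal sketch of exporting a PyTorch module to ONNX and running it with ONNX Runtime; the model, file name, and input shape are placeholder assumptions:
import torch
import onnxruntime as ort
# Placeholder model and example input; substitute your own module
model = torch.nn.Linear(8, 4).eval()
example = torch.randn(1, 8)
# Export to ONNX, the extra step ONNX Runtime typically requires
torch.onnx.export(model, example, "model.onnx",
                  input_names=["input"], output_names=["output"])
# Run the exported model with ONNX Runtime on CPU
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
output = session.run(None, {"input": example.numpy()})
print(output[0].shape)  # (1, 4)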
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Pros of TensorRT
- Highly optimized for NVIDIA GPUs, offering superior performance for deep learning inference
- Supports a wide range of deep learning frameworks, including TensorFlow, PyTorch, and ONNX
- Provides advanced optimizations like layer fusion and precision calibration
Cons of TensorRT
- Limited to NVIDIA hardware, lacking support for other GPU or CPU architectures
- Steeper learning curve and more complex setup compared to Intel Extension for PyTorch
- May require model conversion and optimization, which can be time-consuming
Code Comparison
TensorRT:
import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
onnx_file = "model.onnx"  # path to an ONNX model exported beforehand
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)
parser.parse_from_file(onnx_file)
# Legacy (pre-TensorRT 8) API; newer releases build a serialized engine instead
engine = builder.build_cuda_engine(network)
Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex
model = torch.jit.load("model.pt")
model = ipex.optimize(model)
# Run inference with BF16 autocast on CPU
with torch.cpu.amp.autocast():
    output = model(input_data)
Both repositories aim to optimize deep learning models for specific hardware, but TensorRT focuses on NVIDIA GPUs, while Intel Extension for PyTorch targets Intel CPUs and GPUs. TensorRT offers more advanced optimizations but requires more setup and is limited to NVIDIA hardware. Intel Extension for PyTorch provides an easier integration with existing PyTorch code and supports a broader range of Intel hardware.
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Pros of TVM
- Broader hardware support, including GPUs, CPUs, and various AI accelerators
- More flexible and customizable compilation pipeline
- Active open-source community with frequent updates and contributions
Cons of TVM
- Steeper learning curve due to its more complex architecture
- May require more manual optimization for specific hardware
- Less specialized for Intel hardware compared to Intel Extension for PyTorch
Code Comparison
TVM:
import tvm
from tvm import relay
# Define a simple network
data = relay.var("data", relay.TensorType((1, 3, 224, 224), "float32"))
weight = relay.var("weight", relay.TensorType((64, 3, 3, 3), "float32"))
conv2d = relay.nn.conv2d(data, weight, channels=64, kernel_size=(3, 3))
func = relay.Function([data, weight], conv2d)
# Compile the network
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(func, target)
Intel Extension for PyTorch:
import torch
import intel_extension_for_pytorch as ipex
# Define a simple network
model = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1)
model = ipex.optimize(model)
# Prepare input data
input_data = torch.randn(1, 3, 224, 224)
# Run inference
with torch.no_grad():
    output = model(input_data)
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Extensive library of pre-trained models for various NLP tasks
- Active community and frequent updates
- Comprehensive documentation and examples
Cons of transformers
- Larger library size and potential overhead
- May require more setup for specific hardware optimizations
Code comparison
transformers:
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
intel-extension-for-pytorch:
import intel_extension_for_pytorch as ipex
import torch
model = torch.jit.load('model.pt')
model = ipex.optimize(model)
Key differences
- transformers focuses on providing a wide range of pre-trained models and tools for NLP tasks
- intel-extension-for-pytorch is specifically designed to optimize PyTorch models for Intel hardware
- transformers offers higher-level abstractions for working with models, while intel-extension-for-pytorch provides lower-level optimizations
Use cases
- transformers: Ideal for rapid prototyping and experimentation with various NLP models
- intel-extension-for-pytorch: Best for optimizing PyTorch models on Intel CPUs and accelerators; a combined sketch follows below
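As a rough illustration of combining the two, the sketch below applies ipex.optimize to a Hugging Face BERT model for CPU inference; the checkpoint name and input text are only examples, and downloading the model requires network access:
import torch
import intel_extension_for_pytorch as ipex
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased').eval()
model = ipex.optimize(model)  # Intel-specific operator/graph optimizations
inputs = tokenizer("Intel Extension for PyTorch example", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)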
README
Intel® Extension for PyTorch*
CPU: main branch | Quick Start | Documentations | Installation | LLM Example
GPU: main branch | Quick Start | Documentations | Installation | LLM Example
Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
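As a rough sketch of the xpu device mentioned above (it assumes an Intel discrete GPU and a GPU-enabled build of the extension; the model and tensor shapes are placeholders):
import torch
import intel_extension_for_pytorch as ipex
# Requires an Intel discrete GPU and the XPU build of Intel Extension for PyTorch
model = torch.nn.Linear(16, 4).eval().to("xpu")
data = torch.randn(8, 16).to("xpu")
model = ipex.optimize(model)
with torch.no_grad():
    output = model(data)
print(output.device)  # xpu:0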
ipex.llm - Large Language Models (LLMs) Optimization
In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLM models are introduced in the Intel® Extension for PyTorch*. Check LLM optimizations for details.
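A minimal sketch of the LLM frontend, assuming the ipex.llm.optimize API described in the LLM optimization documentation, BF16-capable hardware, and a checkpoint you already have access to; the model name and generation settings are illustrative:
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "meta-llama/Llama-2-7b-hf"  # any model from the optimized list below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model = ipex.llm.optimize(model, dtype=torch.bfloat16)  # LLM-specific optimizations
inputs = tokenizer("What is Intel AMX?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))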
Optimized Model List
MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Static quantization INT8 | Weight only quantization INT8 | Weight only quantization INT4 |
---|---|---|---|---|---|---|
LLAMA | meta-llama/Llama-2-7b-hf | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
LLAMA | meta-llama/Llama-2-13b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
LLAMA | meta-llama/Llama-2-70b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
LLAMA | meta-llama/Meta-Llama-3-8B | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
LLAMA | meta-llama/Meta-Llama-3-70B | 🟩 | 🟩 | 🟨 | 🟩 | 🟩 |
LLAMA | meta-llama/Meta-Llama-3.1-8B-Instruct | 🟩 | 🟩 | 🟨 | 🟩 | 🟩 |
GPT-J | EleutherAI/gpt-j-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
GPT-NEOX | EleutherAI/gpt-neox-20b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
DOLLY | databricks/dolly-v2-12b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
FALCON | tiiuae/falcon-7b | 🟩 | 🟩 | 🟩 | 🟩 | |
FALCON | tiiuae/falcon-11b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
FALCON | tiiuae/falcon-40b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
OPT | facebook/opt-30b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
OPT | facebook/opt-1.3b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
Bloom | bigscience/bloom-1b7 | 🟩 | 🟨 | 🟩 | 🟩 | 🟨 |
CodeGen | Salesforce/codegen-2B-multi | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
Baichuan | baichuan-inc/Baichuan2-7B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
Baichuan | baichuan-inc/Baichuan2-13B-Chat | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Baichuan | baichuan-inc/Baichuan-13B-Chat | 🟩 | 🟨 | 🟩 | 🟩 | 🟨 |
ChatGLM | THUDM/chatglm3-6b | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
ChatGLM | THUDM/chatglm2-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
GPTBigCode | bigcode/starcoder | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
T5 | google/flan-t5-xl | 🟩 | 🟩 | 🟨 | 🟩 | |
MPT | mosaicml/mpt-7b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
Mistral | mistralai/Mistral-7B-v0.1 | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Mixtral | mistralai/Mixtral-8x7B-v0.1 | 🟩 | 🟩 | 🟩 | 🟨 | |
Stablelm | stabilityai/stablelm-2-1_6b | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Qwen | Qwen/Qwen-7B-Chat | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Qwen | Qwen/Qwen2-7B | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
LLaVA | liuhaotian/llava-v1.5-7b | 🟩 | 🟩 | 🟩 | 🟩 | |
GIT | microsoft/git-base | 🟩 | 🟩 | 🟩 | | |
Yuan | IEITYuan/Yuan2-102B-hf | 🟩 | 🟩 | 🟨 | | |
Phi | microsoft/phi-2 | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
Phi | microsoft/Phi-3-mini-4k-instruct | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Phi | microsoft/Phi-3-mini-128k-instruct | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Phi | microsoft/Phi-3-medium-4k-instruct | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Phi | microsoft/Phi-3-medium-128k-instruct | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
Whisper | openai/whisper-large-v2 | 🟩 | 🟩 | 🟩 | 🟩 | |
- 🟩 signifies that the model performs well with good accuracy (<1% difference compared with FP32).
- 🟨 signifies that the model performs well, but accuracy may not be in a perfect state (>1% difference compared with FP32).
Note: The verified models above (including other models in the same model family, like "codellama/CodeLlama-7b-hf" from the LLAMA family) are well supported with all optimizations, such as indirect access KV cache, fused ROPE, and customized linear kernels. Work is in progress to better support the models in the table with various data types. In addition, more models will be optimized in the future.
In addition, Intel® Extension for PyTorch* has introduced module-level optimization APIs (prototype feature) since release 2.3.0. The feature provides optimized alternatives to several commonly used LLM modules and functionalities for optimizing niche or customized LLMs. Please read the LLM module level optimization practice to better understand how to optimize your own LLM and achieve better performance.
Support
The team tracks bugs and enhancement requests using GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.
License
Apache License, Version 2.0, as found in the LICENSE file.
Security
See Intel's Security Center for information on how to report a potential security issue or vulnerability.
See also: Security Policy