Top Related Projects
pybind11: Seamless operability between C++11 and Python
Cython: The most widely used Python to C compiler
CuPy: NumPy & SciPy for GPU
Taichi: Productive, portable, and performant GPU programming in Python
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Quick Overview
Numba is a Just-In-Time (JIT) compiler for Python that translates Python and NumPy code into fast machine code. It is designed to improve the performance of numerical and scientific Python applications, particularly those using NumPy arrays and functions.
Pros
- Significant performance improvements for numerical Python code
- Easy to use with minimal code changes required
- Supports both CPU and GPU acceleration
- Integrates well with the NumPy ecosystem
Cons
- Limited support for Python's full feature set
- May not provide significant speedups for non-numerical code
- Learning curve for advanced usage and custom optimizations
- Compilation overhead can impact performance for short-running functions
Code Examples
- Basic function compilation:
from numba import jit
import numpy as np
@jit(nopython=True)
def sum_of_squares(arr):
    result = 0.0
    for x in arr:
        result += x * x
    return result

arr = np.arange(1000000)
print(sum_of_squares(arr))
- Parallel processing with Numba:
from numba import jit, prange
import numpy as np
@jit(nopython=True, parallel=True)
def parallel_sum(arr):
    result = 0.0
    for i in prange(len(arr)):
        result += arr[i]
    return result

arr = np.random.rand(1000000)
print(parallel_sum(arr))
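Numba's threading API can inspect or cap the number of threads that parallel=True regions use. A small sketch, reusing parallel_sum and arr from the example above:

import numba

print(numba.get_num_threads())   # threads available to parallel regions
numba.set_num_threads(4)         # limit subsequent parallel execution to 4 threads
print(parallel_sum(arr))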
- CUDA GPU acceleration:
from numba import cuda
import numpy as np
@cuda.jit
def increment_by_one(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1

arr = np.arange(1000000)
d_arr = cuda.to_device(arr)
increment_by_one[1024, 1024](d_arr)
result = d_arr.copy_to_host()
print(result[:10])
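In practice the launch configuration [blocks_per_grid, threads_per_block] is derived from the array size rather than hard-coded. A small sketch, reusing the kernel and device array from the CUDA example above:

threads_per_block = 256
blocks_per_grid = (arr.size + threads_per_block - 1) // threads_per_block   # ceiling division
increment_by_one[blocks_per_grid, threads_per_block](d_arr)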
Getting Started
To get started with Numba:
- Install Numba:
pip install numba
- Import Numba in your Python script:
from numba import jit
- Decorate your function with @jit:
@jit(nopython=True)
def my_function(x, y):
    return x + y
- Call your function as usual:
result = my_function(10, 20)
print(result)
Numba compiles the function to machine code the first time it is called and reuses the cached machine code on subsequent calls, so the speedup appears after the initial compilation, as the timing sketch below illustrates.
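Because the first call pays the compilation cost, it is worth timing a warm-up call separately when benchmarking. A minimal sketch, reusing the sum_of_squares function from the examples above:

import time
from numba import jit
import numpy as np

@jit(nopython=True)
def sum_of_squares(arr):
    result = 0.0
    for x in arr:
        result += x * x
    return result

arr = np.random.rand(1_000_000)

t0 = time.perf_counter()
sum_of_squares(arr)      # first call: includes JIT compilation
t1 = time.perf_counter()
sum_of_squares(arr)      # second call: runs cached machine code
t2 = time.perf_counter()
print(f"first call:  {t1 - t0:.4f} s")
print(f"second call: {t2 - t1:.4f} s")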
Competitor Comparisons
pybind11: Seamless operability between C++11 and Python
Pros of pybind11
- Seamless integration with C++ libraries and existing codebases
- More flexible and powerful for complex C++ bindings
- Better support for C++ templates and advanced features
Cons of pybind11
- Requires C++ knowledge and compilation
- More setup and boilerplate code needed
- Steeper learning curve for Python-only developers
Code Comparison
pybind11:
#include <pybind11/pybind11.h>

int add(int i, int j) {
    return i + j;
}

PYBIND11_MODULE(example, m) {
    m.def("add", &add, "A function that adds two numbers");
}
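Once the extension has been built (for example with CMake or setuptools, not shown here), the compiled module is imported from Python like any other. A minimal usage sketch:

import example              # the compiled pybind11 extension defined above
print(example.add(1, 2))    # -> 3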
Numba:
from numba import jit
@jit(nopython=True)
def add(i, j):
    return i + j
Key Differences
- pybind11 is primarily for creating Python bindings for C++ code, while Numba focuses on JIT compilation of Python code
- pybind11 offers more control over low-level details and C++ integration, whereas Numba provides easier Python-centric optimization
- Numba is generally easier to use for Python developers, while pybind11 is more powerful for C++ integration scenarios
Both tools have their strengths and are suited for different use cases, with pybind11 excelling in C++ interoperability and Numba in Python code optimization.
Cython: The most widely used Python to C compiler
Pros of Cython
- More flexible and powerful, allowing for fine-grained control over C-level optimizations
- Better integration with external C libraries and existing C code
- Supports writing pure Python, pure C, and a mix of both within the same file
Cons of Cython
- Requires explicit type declarations and Cython-specific syntax for optimal performance
- Compilation step needed, which can slow down development iterations
- Steeper learning curve compared to Numba's more Python-like approach
Code Comparison
Cython:
cdef int fibonacci(int n):
    cdef int a = 0, b = 1, i
    for i in range(n):
        a, b = b, a + b
    return a
Numba:
from numba import jit

@jit(nopython=True)
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
Both Cython and Numba aim to improve Python performance, but they take different approaches. Cython provides a superset of Python with additional syntax for C-like performance, while Numba focuses on JIT compilation of pure Python code. Cython offers more control and C integration, while Numba provides easier adoption for existing Python code.
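To make the compilation step mentioned above concrete, here is a minimal build sketch, assuming the Cython code lives in a file named fib.pyx (and that the function is declared cpdef, or wrapped, so it is callable from Python):

# setup.py
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("fib.pyx"))

Running python setup.py build_ext --inplace then produces an importable extension module.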
CuPy: NumPy & SciPy for GPU
Pros of CuPy
- Designed specifically for CUDA GPU acceleration, offering better performance for large-scale array operations
- Provides a drop-in replacement for NumPy, making it easier to port existing code
- Supports a wider range of CUDA-specific features and libraries
Cons of CuPy
- Limited to CUDA-enabled GPUs, not as versatile as Numba for different hardware
- Requires separate installation of CUDA toolkit
- May have compatibility issues with some NumPy functions or libraries
Code Comparison
Numba:
from numba import jit
import numpy as np
@jit(nopython=True)
def sum_array(arr):
    return np.sum(arr)
CuPy:
import cupy as cp
def sum_array(arr):
    return cp.sum(arr)
Both Numba and CuPy aim to accelerate numerical computations in Python, but they take different approaches. Numba focuses on just-in-time compilation for both CPU and GPU, while CuPy specializes in GPU acceleration using CUDA. Numba offers more flexibility across hardware, while CuPy provides deeper integration with CUDA-specific features. The choice between them depends on the specific requirements of your project and the available hardware.
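To illustrate the drop-in relationship with NumPy, here is a small sketch (assuming a CUDA-capable GPU and the CUDA toolkit are available) that moves data to the GPU, computes there, and copies the result back:

import numpy as np
import cupy as cp

x_cpu = np.random.rand(1_000_000)
x_gpu = cp.asarray(x_cpu)      # copy the host array to the GPU
total = cp.sum(x_gpu)          # computed on the GPU
print(cp.asnumpy(total))       # copy the scalar result back to the host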
Taichi: Productive, portable, and performant GPU programming in Python
Pros of Taichi
- Supports multiple backends (CPU, GPU, CUDA) with a unified programming model
- Offers automatic differentiation and sparse computation
- Provides a more intuitive syntax for parallel computing
Cons of Taichi
- Smaller community and ecosystem compared to Numba
- Steeper learning curve for users familiar with NumPy-style programming
- Limited support for certain data types and operations
Code Comparison
Numba example:
from numba import jit
import numpy as np
@jit(nopython=True)
def sum_array(arr):
    return np.sum(arr)
Taichi example:
import taichi as ti

ti.init()

@ti.kernel
def sum_array(arr: ti.template()) -> ti.f32:
    # Accumulate into a local variable; Taichi parallelizes the
    # top-level loop and handles the reduction.
    total = 0.0
    for i in arr:
        total += arr[i]
    return total
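As a usage sketch (assuming a 1-D ti.f32 field; the field name and size here are illustrative), the kernel above could be called like this:

import numpy as np

n = 1_000_000
x = ti.field(ti.f32, shape=n)      # field lives on the backend chosen by ti.init()
x.from_numpy(np.random.rand(n).astype(np.float32))
print(sum_array(x))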
Both Numba and Taichi aim to accelerate Python code, but they take different approaches. Numba focuses on JIT compilation of NumPy-like code, while Taichi provides a more comprehensive framework for high-performance computing across various hardware backends. Taichi offers more advanced features like automatic differentiation and sparse computation, but Numba has a larger user base and better integration with the existing NumPy ecosystem.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pros of ONNX Runtime
- Broader ecosystem support, compatible with multiple ML frameworks
- Optimized for production deployment and inference
- Supports hardware acceleration across various devices (CPU, GPU, etc.)
Cons of ONNX Runtime
- Steeper learning curve for beginners
- Less flexible for custom Python code optimization
- Primarily focused on inference, not training
Code Comparison
ONNX Runtime example:
import onnxruntime as ort

# Load a serialized ONNX model and run inference.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
# input_data: a NumPy array matching the model's expected input shape and dtype
output = session.run(None, {input_name: input_data})
Numba example:
from numba import jit

@jit(nopython=True)
def optimized_function(x):
    # Custom computation logic (illustrative placeholder)
    result = x * x + 1.0
    return result
ONNX Runtime is designed for deploying and optimizing pre-trained models, while Numba focuses on accelerating Python functions. ONNX Runtime provides a standardized format for ML models, enabling interoperability between different frameworks. Numba, on the other hand, allows for more fine-grained optimization of Python code, particularly useful for numerical computations and custom algorithms.
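As a sketch of the hardware-acceleration point (assuming the GPU-enabled onnxruntime package is installed and model.onnx exists), execution providers are selected when the session is created:

import onnxruntime as ort

# Prefer the CUDA execution provider, falling back to the CPU provider.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())   # shows which providers were actually loaded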
README
Numba
.. image:: https://badges.gitter.im/numba/numba.svg
   :target: https://gitter.im/numba/numba?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge
   :alt: Gitter

.. image:: https://img.shields.io/badge/discuss-on%20discourse-blue
   :target: https://numba.discourse.group/
   :alt: Discourse

.. image:: https://zenodo.org/badge/3659275.svg
   :target: https://zenodo.org/badge/latestdoi/3659275
   :alt: Zenodo DOI

.. image:: https://img.shields.io/pypi/v/numba.svg
   :target: https://pypi.python.org/pypi/numba/
   :alt: PyPI

.. image:: https://dev.azure.com/numba/numba/_apis/build/status/numba.numba?branchName=main
   :target: https://dev.azure.com/numba/numba/_build/latest?definitionId=1?branchName=main
   :alt: Azure Pipelines
A Just-In-Time Compiler for Numerical Functions in Python
#########################################################
Numba is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax.
Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.
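For example, ufunc creation is exposed through the vectorize decorator, which turns a scalar Python function into a NumPy ufunc that broadcasts over arrays; a small sketch:

from numba import vectorize
import numpy as np

@vectorize(['float64(float64, float64)'])
def rel_diff(x, y):
    return 2 * (x - y) / (x + y)

a = np.arange(1.0, 5.0)
b = np.arange(2.0, 6.0)
print(rel_diff(a, b))   # element-wise, like any NumPy ufunc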
For more information about Numba, see the Numba homepage: https://numba.pydata.org and the online documentation: https://numba.readthedocs.io/en/stable/index.html
Installation
Please follow the instructions:
https://numba.readthedocs.io/en/stable/user/installing.html
Demo
Please have a look at the demo notebooks via the mybinder service:
https://mybinder.org/v2/gh/numba/numba-examples/master?filepath=notebooks
Contact
Numba has a Discourse forum for discussions: https://numba.discourse.group/