
jax-ml / jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more


Top Related Projects

  • Flax (5,936 stars): Flax is a neural network library for JAX that is designed for flexibility.
  • PyTorch (82,049 stars): Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • TensorFlow (185,446 stars): An Open Source Machine Learning Framework for Everyone
  • DeepSpeed (34,658 stars): DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • Horovod (14,147 stars): Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
  • Transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Quick Overview

JAX is a high-performance numerical computing library developed by Google Research. It combines NumPy's familiar API with the benefits of automatic differentiation and GPU/TPU acceleration. JAX is designed for machine learning research and large-scale numerical computations.

Pros

  • Automatic differentiation for efficient gradient computations
  • Seamless GPU and TPU acceleration
  • Just-in-time (JIT) compilation for improved performance
  • Functional programming paradigm for better composability and parallelism

Cons

  • Steeper learning curve compared to pure NumPy
  • Limited support for imperative-style programming
  • Smaller ecosystem compared to more established frameworks like TensorFlow or PyTorch
  • Some operations may be slower than in other libraries when not JIT-compiled

Code Examples

  1. Basic array operations:
import jax.numpy as jnp

x = jnp.array([1, 2, 3])
y = jnp.array([4, 5, 6])
z = jnp.dot(x, y)
print(z)  # Output: 32
  2. Automatic differentiation:
from jax import grad

def f(x):
    return x ** 2

df = grad(f)
print(df(3.0))  # Output: 6.0
  3. JIT compilation:
from jax import jit
import jax.numpy as jnp

@jit
def matrix_multiply(a, b):
    return jnp.dot(a, b)

a = jnp.array([[1, 2], [3, 4]])
b = jnp.array([[5, 6], [7, 8]])
result = matrix_multiply(a, b)
print(result)
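
  4. Automatic vectorization with vmap (an added minimal sketch, in the same spirit as the examples above):
import jax.numpy as jnp
from jax import vmap

def dot(v1, v2):
    return jnp.dot(v1, v2)

batch_v1 = jnp.arange(6.0).reshape(2, 3)
batch_v2 = jnp.ones((2, 3))
# vmap maps `dot` over the leading axis of both arguments
print(vmap(dot)(batch_v1, batch_v2))  # Output: [ 3. 12.]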

Getting Started

To get started with JAX, follow these steps:

  1. Install JAX and its dependencies:
pip install --upgrade pip
pip install --upgrade jax            # CPU-only
pip install --upgrade "jax[cuda12]"  # or, for NVIDIA GPUs (CUDA 12)
  2. Import JAX in your Python script:
import jax
import jax.numpy as jnp
from jax import grad, jit, vmap
  3. Start using JAX for numerical computations and machine learning tasks:
# Define a function
def f(x):
    return jnp.sum(jnp.sin(x))

# Compute its gradient
grad_f = grad(f)

# Use it on some data
x = jnp.arange(5.0)
print(grad_f(x))

This will get you started with basic JAX functionality. For more advanced usage, refer to the official documentation and examples.

Competitor Comparisons

Flax (5,936 stars)

Flax is a neural network library for JAX that is designed for flexibility.

Pros of Flax

  • Higher-level API, making it easier to build and train neural networks
  • Includes pre-built modules and layers for common architectures
  • Better documentation and tutorials for beginners

Cons of Flax

  • Less flexible than JAX for custom, low-level operations
  • Smaller community and ecosystem compared to JAX
  • May have slightly higher overhead due to abstraction layers

Code Comparison

JAX (low-level):

import jax.numpy as jnp
from jax import grad, jit

def loss(params, x, y):
    return jnp.mean((params[0] * x + params[1] - y) ** 2)

grad_loss = jit(grad(loss))

Flax (high-level):

import flax.linen as nn

class LinearModel(nn.Module):
    @nn.compact
    def __call__(self, x):
        return nn.Dense(1)(x)

model = LinearModel()
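
A hedged sketch of initializing and applying the Flax module above (the input shape and PRNG key are assumptions, not part of the original comparison):

import jax
import jax.numpy as jnp

x = jnp.ones((4, 3))                       # assumed batch of 4 examples, 3 features
params = model.init(jax.random.key(0), x)  # materialize the parameters
y = model.apply(params, x)                 # run the forward pass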

Both JAX and Flax are powerful libraries for machine learning in Python, with JAX providing a low-level, flexible foundation and Flax offering a higher-level, more user-friendly interface built on top of JAX. The choice between them depends on the specific needs of the project and the developer's experience level.

PyTorch (82,049 stars)

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • Easier to debug with eager execution by default
  • More extensive ecosystem and community support
  • Better support for dynamic computational graphs

Cons of PyTorch

  • Generally slower compilation times compared to JAX
  • Less efficient automatic differentiation for large models

Code Comparison

PyTorch:

import torch

def model(x):
    return torch.nn.functional.relu(x)

x = torch.randn(5)
y = model(x)

JAX:

import jax.numpy as jnp
from jax import jit, random

def model(x):
    return jnp.maximum(x, 0)

x = random.normal(random.key(0), (5,))  # JAX uses explicit PRNG keys; there is no jnp.randn
y = jit(model)(x)

Both frameworks offer similar syntax for defining and using models, but JAX's jit compilation can provide performance benefits for repeated computations. PyTorch's eager execution makes debugging easier, while JAX's functional approach and automatic differentiation can lead to more efficient and scalable code for certain use cases.
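
As a rough illustration of that trade-off, here is a hedged sketch of timing a jitted JAX function (exact numbers depend on your hardware):

import time
import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    return (x @ x.T).sum()

x = jnp.ones((2000, 2000))
f(x).block_until_ready()             # first call triggers compilation
start = time.perf_counter()
f(x).block_until_ready()             # later calls reuse the compiled kernel
print(time.perf_counter() - start)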

TensorFlow (185,446 stars)

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Extensive ecosystem with tools like TensorBoard for visualization
  • Robust production deployment options (TensorFlow Serving, TensorFlow Lite)
  • Larger community and more extensive documentation

Cons of TensorFlow

  • More complex API compared to JAX's simplicity
  • Less flexibility for custom gradient computations
  • Slower compilation times for large models

Code Comparison

TensorFlow:

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
y = tf.matmul(x, x)

JAX:

import jax.numpy as jnp

x = jnp.array([[1., 2.], [3., 4.]])
y = jnp.matmul(x, x)

Both frameworks offer similar syntax for basic operations, but JAX provides a more NumPy-like interface. TensorFlow's API is more verbose, especially for more complex operations. JAX's functional approach and automatic differentiation make it easier to work with custom gradients and transformations, while TensorFlow's object-oriented approach can be more intuitive for some developers. TensorFlow's eager execution mode brings it closer to JAX's immediate execution style, but JAX still maintains an edge in terms of simplicity and flexibility for research-oriented tasks.
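
For instance, defining a custom gradient in JAX takes only a few lines with jax.custom_vjp. A minimal added sketch (the clipping rule here is just an illustrative choice):

import jax
import jax.numpy as jnp

@jax.custom_vjp
def clipped_square(x):
    return x ** 2

def clipped_square_fwd(x):
    return clipped_square(x), x               # save x as the residual

def clipped_square_bwd(x, g):
    return (jnp.clip(2 * x, -1.0, 1.0) * g,)  # clip the true gradient 2x

clipped_square.defvjp(clipped_square_fwd, clipped_square_bwd)
print(jax.grad(clipped_square)(3.0))  # prints 1.0 instead of 6.0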

DeepSpeed (34,658 stars)

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Optimized for distributed training on large-scale models
  • Supports ZeRO optimizer for efficient memory usage
  • Integrates well with PyTorch ecosystem

Cons of DeepSpeed

  • Steeper learning curve compared to JAX
  • Less flexible for non-deep learning tasks
  • Primarily focused on PyTorch, limiting cross-framework compatibility

Code Comparison

DeepSpeed:

import deepspeed

model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, model_parameters=params
)
for step in range(steps):
    loss = model_engine(data)
    model_engine.backward(loss)
    model_engine.step()

JAX:

import jax
import optax

@jax.jit
def train_step(params, opt_state, data):
    loss, grads = jax.value_and_grad(loss_fn)(params, data)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

for step in range(steps):
    params, opt_state, loss = train_step(params, opt_state, data)

DeepSpeed focuses on distributed training optimizations, while JAX provides a more general-purpose framework for numerical computing and machine learning. DeepSpeed is better suited for large-scale deep learning projects, especially those using PyTorch, while JAX offers greater flexibility and ease of use for a wider range of applications.

Horovod (14,147 stars)

Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.

Pros of Horovod

  • Designed specifically for distributed deep learning, offering excellent scalability across multiple GPUs and nodes
  • Supports multiple deep learning frameworks (TensorFlow, PyTorch, MXNet) with a unified API
  • Integrates well with existing codebases, requiring minimal changes to enable distributed training

Cons of Horovod

  • Limited to distributed training use cases, not as versatile for general-purpose machine learning tasks
  • Requires additional setup and configuration compared to JAX's simpler approach
  • May have a steeper learning curve for users not familiar with distributed computing concepts

Code Comparison

Horovod (with TensorFlow):

import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
optimizer = tf.optimizers.Adam(lr * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer)

JAX:

from jax import pmap, grad

@pmap
def update(params, x, y):
    grads = grad(loss_fn)(params, x, y)
    return params - learning_rate * grads

Both JAX and Horovod offer powerful tools for machine learning, but they serve different purposes. JAX provides a flexible, high-performance framework for numerical computing and machine learning, while Horovod focuses on distributed deep learning across multiple GPUs and nodes.
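
To make the comparison concrete, the JAX analogue of Horovod's gradient allreduce is a collective such as lax.pmean inside pmap. A hedged sketch (loss_fn and learning_rate are assumed to exist):

from functools import partial
from jax import grad, lax, pmap, tree_util

@partial(pmap, axis_name='devices')
def update(params, x, y):
    grads = grad(loss_fn)(params, x, y)
    grads = lax.pmean(grads, axis_name='devices')  # average grads across devices
    return tree_util.tree_map(lambda p, g: p - learning_rate * g, params, grads)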

Transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of Transformers

  • Extensive pre-trained model library for various NLP tasks
  • High-level API for easy model fine-tuning and deployment
  • Active community and frequent updates

Cons of Transformers

  • Limited to transformer-based architectures
  • Heavier dependency footprint
  • Less flexibility for custom low-level operations

Code Comparison

Transformers example:

from transformers import BertTokenizer, BertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

JAX example:

import jax.numpy as jnp
from jax import grad, jit

def loss(params, x, y):
    return jnp.sum((params * x - y) ** 2)

grad_loss = jit(grad(loss))

JAX offers more flexibility for custom computations and automatic differentiation, while Transformers provides ready-to-use models for NLP tasks. JAX is better suited for researchers and those needing low-level control, whereas Transformers is ideal for practitioners looking to quickly implement state-of-the-art NLP models.


README


Transformable numerical computing at scale


Quickstart | Transformations | Install guide | Neural net libraries | Change logs | Reference docs

What is JAX?

JAX is a Python library for accelerator-oriented array computation and program transformation, designed for high-performance numerical computing and large-scale machine learning.

With its updated version of Autograd, JAX can automatically differentiate native Python and NumPy functions. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via grad as well as forward-mode differentiation, and the two can be composed arbitrarily to any order.

What’s new is that JAX uses XLA to compile and run your NumPy programs on GPUs and TPUs. Compilation happens under the hood by default, with library calls getting just-in-time compiled and executed. But JAX also lets you just-in-time compile your own Python functions into XLA-optimized kernels using a one-function API, jit. Compilation and automatic differentiation can be composed arbitrarily, so you can express sophisticated algorithms and get maximal performance without leaving Python. You can even program multiple GPUs or TPU cores at once using pmap, and differentiate through the whole thing.

Dig a little deeper, and you'll see that JAX is really an extensible system for composable function transformations. Both grad and jit are instances of such transformations. Others are vmap for automatic vectorization and pmap for single-program multiple-data (SPMD) parallel programming of multiple accelerators, with more to come.

This is a research project, not an official Google product. Expect bugs and sharp edges. Please help by trying it out, reporting bugs, and letting us know what you think!

import jax.numpy as jnp
from jax import grad, jit, vmap

def predict(params, inputs):
  for W, b in params:
    outputs = jnp.dot(inputs, W) + b
    inputs = jnp.tanh(outputs)  # inputs to the next layer
  return outputs                # no activation on last layer

def loss(params, inputs, targets):
  preds = predict(params, inputs)
  return jnp.sum((preds - targets)**2)

grad_loss = jit(grad(loss))  # compiled gradient evaluation function
perex_grads = jit(vmap(grad_loss, in_axes=(None, 0, 0)))  # fast per-example grads

Quickstart: Colab in the Cloud

Jump right in using a notebook in your browser, connected to a Google Cloud GPU; starter notebooks are available in the JAX documentation.

JAX now runs on Cloud TPUs. To try out the preview, see the Cloud TPU Colabs.

For a deeper dive into JAX, see the reference documentation linked at the end of this README.

Transformations

At its core, JAX is an extensible system for transforming numerical functions. Here are four transformations of primary interest: grad, jit, vmap, and pmap.

Automatic differentiation with grad

JAX has roughly the same API as Autograd. The most popular function is grad for reverse-mode gradients:

from jax import grad
import jax.numpy as jnp

def tanh(x):  # Define a function
  y = jnp.exp(-2.0 * x)
  return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)  # Obtain its gradient function
print(grad_tanh(1.0))   # Evaluate it at x = 1.0
# prints 0.4199743

You can differentiate to any order with grad.

print(grad(grad(grad(tanh)))(1.0))
# prints 0.62162673

For more advanced autodiff, you can use jax.vjp for reverse-mode vector-Jacobian products and jax.jvp for forward-mode Jacobian-vector products. The two can be composed arbitrarily with one another, and with other JAX transformations. Here's one way to compose those to make a function that efficiently computes full Hessian matrices:

from jax import jit, jacfwd, jacrev

def hessian(fun):
  return jit(jacfwd(jacrev(fun)))
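
As a quick added sanity check (the test function is an assumption): the Hessian of x ↦ x·x is twice the identity.

import jax.numpy as jnp

def f(x):
  return jnp.dot(x, x)

print(hessian(f)(jnp.arange(3.0)))
# [[2. 0. 0.]
#  [0. 2. 0.]
#  [0. 0. 2.]]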

As with Autograd, you're free to use differentiation with Python control structures:

def abs_val(x):
  if x > 0:
    return x
  else:
    return -x

abs_val_grad = grad(abs_val)
print(abs_val_grad(1.0))   # prints 1.0
print(abs_val_grad(-1.0))  # prints -1.0 (abs_val is re-evaluated)

See the reference docs on automatic differentiation and the JAX Autodiff Cookbook for more.

Compilation with jit

You can use XLA to compile your functions end-to-end with jit, used either as an @jit decorator or as a higher-order function.

import jax.numpy as jnp
from jax import jit

def slow_f(x):
  # Element-wise ops see a large benefit from fusion
  return x * x + x * 2.0

x = jnp.ones((5000, 5000))
fast_f = jit(slow_f)
%timeit -n10 -r3 fast_f(x)  # ~ 4.5 ms / loop on Titan X
%timeit -n10 -r3 slow_f(x)  # ~ 14.5 ms / loop (also on GPU via JAX)

You can mix jit and grad and any other JAX transformation however you like.

Using jit puts constraints on the kind of Python control flow the function can use; see the Gotchas Notebook for more.
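
For example, a value that drives Python control flow can be marked static, at the cost of one recompilation per distinct value. A minimal added sketch:

from functools import partial
from jax import jit

@partial(jit, static_argnums=(1,))
def poly_eval(x, degree):
  # `degree` is static, so this Python loop is unrolled at trace time
  acc = 0.0
  for k in range(degree + 1):
    acc = acc + x ** k
  return acc

print(poly_eval(2.0, 3))  # 1 + 2 + 4 + 8 = 15.0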

Auto-vectorization with vmap

vmap is the vectorizing map. It has the familiar semantics of mapping a function along array axes, but instead of keeping the loop on the outside, it pushes the loop down into a function’s primitive operations for better performance.

Using vmap can save you from having to carry around batch dimensions in your code. For example, consider this simple unbatched neural network prediction function:

def predict(params, input_vec):
  assert input_vec.ndim == 1
  activations = input_vec
  for W, b in params:
    outputs = jnp.dot(W, activations) + b  # `activations` on the right-hand side!
    activations = jnp.tanh(outputs)        # inputs to the next layer
  return outputs                           # no activation on last layer

We often instead write jnp.dot(activations, W) to allow for a batch dimension on the left side of activations, but we’ve written this particular prediction function to apply only to single input vectors. If we wanted to apply this function to a batch of inputs at once, semantically we could just write

from functools import partial
predictions = jnp.stack(list(map(partial(predict, params), input_batch)))

But pushing one example through the network at a time would be slow! It’s better to vectorize the computation, so that at every layer we’re doing matrix-matrix multiplication rather than matrix-vector multiplication.

The vmap function does that transformation for us. That is, if we write

from jax import vmap
predictions = vmap(partial(predict, params))(input_batch)
# or, alternatively
predictions = vmap(predict, in_axes=(None, 0))(params, input_batch)

then the vmap function will push the outer loop inside the function, and our machine will end up executing matrix-matrix multiplications exactly as if we’d done the batching by hand.

It’s easy enough to manually batch a simple neural network without vmap, but in other cases manual vectorization can be impractical or impossible. Take the problem of efficiently computing per-example gradients: that is, for a fixed set of parameters, we want to compute the gradient of our loss function evaluated separately at each example in a batch. With vmap, it’s easy:

per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets)

Of course, vmap can be arbitrarily composed with jit, grad, and any other JAX transformation! We use vmap with both forward- and reverse-mode automatic differentiation for fast Jacobian and Hessian matrix calculations in jax.jacfwd, jax.jacrev, and jax.hessian.
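
A small added example of those Jacobian functions (the function is elementwise, so its Jacobian is diagonal):

from jax import jacfwd, jacrev
import jax.numpy as jnp

f = lambda x: jnp.sin(x) * x
x = jnp.arange(3.0)
print(jacfwd(f)(x))  # forward-mode Jacobian
print(jacrev(f)(x))  # reverse mode computes the same matrix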

SPMD programming with pmap

For parallel programming of multiple accelerators, like multiple GPUs, use pmap. With pmap you write single-program multiple-data (SPMD) programs, including fast parallel collective communication operations. Applying pmap will mean that the function you write is compiled by XLA (similarly to jit), then replicated and executed in parallel across devices.

Here's an example on an 8-GPU machine:

from jax import random, pmap
import jax.numpy as jnp

# Create 8 random 5000 x 6000 matrices, one per GPU
keys = random.split(random.key(0), 8)
mats = pmap(lambda key: random.normal(key, (5000, 6000)))(keys)

# Run a local matmul on each device in parallel (no data transfer)
result = pmap(lambda x: jnp.dot(x, x.T))(mats)  # result.shape is (8, 5000, 5000)

# Compute the mean on each device in parallel and print the result
print(pmap(jnp.mean)(result))
# prints [1.1566595 1.1805978 ... 1.2321935 1.2015157]

In addition to expressing pure maps, you can use fast collective communication operations between devices:

from functools import partial
from jax import lax

@partial(pmap, axis_name='i')
def normalize(x):
  return x / lax.psum(x, 'i')

print(normalize(jnp.arange(4.)))
# prints [0.         0.16666667 0.33333334 0.5       ]

You can even nest pmap functions for more sophisticated communication patterns.

It all composes, so you're free to differentiate through parallel computations:

from jax import grad, random

# The original example's input isn't shown; this assumes a 4 x 2 array so
# the nested pmaps below span all 8 devices (4 outer x 2 inner).
x = random.normal(random.key(0), (4, 2))

@pmap
def f(x):
  y = jnp.sin(x)
  @pmap
  def g(z):
    return jnp.cos(z) * jnp.tan(y.sum()) * jnp.tanh(x).sum()
  return grad(lambda w: jnp.sum(g(w)))(x)

print(f(x))
# [[ 0.        , -0.7170853 ],
#  [-3.1085174 , -0.4824318 ],
#  [10.366636  , 13.135289  ],
#  [ 0.22163185, -0.52112055]]

print(grad(lambda x: jnp.sum(f(x)))(x))
# [[ -3.2369726,  -1.6356447],
#  [  4.7572474,  11.606951 ],
#  [-98.524414 ,  42.76499  ],
#  [ -1.6007166,  -1.2568436]]

When reverse-mode differentiating a pmap function (e.g. with grad), the backward pass of the computation is parallelized just like the forward pass.

See the SPMD Cookbook and the SPMD MNIST classifier from scratch example for more.

Current gotchas

For a more thorough survey of current gotchas, with examples and explanations, we highly recommend reading the Gotchas Notebook. Some standouts:

  1. JAX transformations only work on pure functions, which don't have side-effects and respect referential transparency (i.e. object identity testing with is isn't preserved). If you use a JAX transformation on an impure Python function, you might see an error like Exception: Can't lift Traced... or Exception: Different traces at same level.
  2. In-place mutating updates of arrays, like x[i] += y, aren't supported, but there are functional alternatives (see the sketches after this list). Under a jit, those functional alternatives will reuse buffers in-place automatically.
  3. Random numbers are different, but for good reasons.
  4. If you're looking for convolution operators, they're in the jax.lax package.
  5. JAX enforces single-precision (32-bit, e.g. float32) values by default, and to enable double-precision (64-bit, e.g. float64) one needs to set the jax_enable_x64 variable at startup (or set the environment variable JAX_ENABLE_X64=True). On TPU, JAX uses 32-bit values by default for everything except internal temporary variables in 'matmul-like' operations, such as jax.numpy.dot and lax.conv. Those ops have a precision parameter which can be used to approximate 32-bit operations via three bfloat16 passes, with a cost of possibly slower runtime. Non-matmul operations on TPU lower to implementations that often emphasize speed over accuracy, so in practice computations on TPU will be less precise than similar computations on other backends.
  6. Some of NumPy's dtype promotion semantics involving a mix of Python scalars and NumPy types aren't preserved, namely np.add(1, np.array([2], np.float32)).dtype is float64 rather than float32.
  7. Some transformations, like jit, constrain how you can use Python control flow. You'll always get loud errors if something goes wrong. You might have to use jit's static_argnums parameter, structured control flow primitives like lax.scan, or just use jit on smaller subfunctions.
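
Minimal added sketches for items 2, 3, and 5 above:

import jax
import jax.numpy as jnp
from jax import random

# Item 2: functional array updates instead of in-place mutation
x = jnp.zeros(5)
y = x.at[2].add(10.0)   # returns a new array; x is unchanged

# Item 3: explicit PRNG keys instead of global random state
key = random.key(0)
key1, key2 = random.split(key)
sample = random.normal(key1, (3,))

# Item 5: opting in to 64-bit values (do this at startup)
jax.config.update("jax_enable_x64", True)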

Installation

Supported platforms

                Linux x86_64   Linux aarch64   Mac x86_64     Mac ARM        Windows x86_64   Windows WSL2 x86_64
CPU             yes            yes             yes            yes            yes              yes
NVIDIA GPU      yes            yes             no             n/a            no               experimental
Google TPU      yes            n/a             n/a            n/a            n/a              n/a
AMD GPU         experimental   no              no             n/a            no               no
Apple GPU       n/a            no              experimental   experimental   n/a              n/a

Instructions

Hardware        Instructions
CPU             pip install -U jax
NVIDIA GPU      pip install -U "jax[cuda12]"
Google TPU      pip install -U "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
AMD GPU         Use Docker or build from source.
Apple GPU       Follow Apple's instructions.

See the documentation for information on alternative installation strategies. These include compiling from source, installing with Docker, using other versions of CUDA, a community-supported conda build, and answers to some frequently-asked questions.

Neural network libraries

Multiple Google research groups develop and share libraries for training neural networks in JAX. If you want a fully featured library for neural network training with examples and how-to guides, try Flax. Check out the new NNX API for a simplified development experience.

Google X maintains the neural network library Equinox. This is used as the foundation for several other libraries in the JAX ecosystem.

In addition, DeepMind has open-sourced an ecosystem of libraries around JAX, including Optax for gradient processing and optimization, RLax for RL algorithms, and chex for reliable code and testing. (See the NeurIPS 2020 "JAX Ecosystem at DeepMind" talk.)
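
As a flavor of that ecosystem, here is a minimal Optax sketch (the toy loss function is an assumption):

import jax
import jax.numpy as jnp
import optax

params = {'w': jnp.ones(3)}
grads = jax.grad(lambda p: jnp.sum(p['w'] ** 2))(params)  # toy quadratic loss

optimizer = optax.adam(1e-3)
opt_state = optimizer.init(params)
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)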

Citing JAX

To cite this repository:

@software{jax2018github,
  author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and George Necula and Adam Paszke and Jake Vander{P}las and Skye Wanderman-{M}ilne and Qiao Zhang},
  title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs},
  url = {http://github.com/jax-ml/jax},
  version = {0.3.13},
  year = {2018},
}

In the above bibtex entry, names are in alphabetical order, the version number is intended to be that from jax/version.py, and the year corresponds to the project's open-source release.

A nascent version of JAX, supporting only automatic differentiation and compilation to XLA, was described in a paper that appeared at SysML 2018. We're currently working on covering JAX's ideas and capabilities in a more comprehensive and up-to-date paper.

Reference documentation

For details about the JAX API, see the reference documentation.

For getting started as a JAX developer, see the developer documentation.