
facebook/Ax

Adaptive Experimentation Platform


Top Related Projects

  • Optuna: A hyperparameter optimization framework
  • BoTorch: Bayesian optimization in PyTorch
  • BayesianOptimization: A Python implementation of global optimization with gaussian processes
  • Hyperopt: Distributed Asynchronous Hyperparameter Optimization in Python
  • MLflow: Open source platform for the machine learning lifecycle

Quick Overview

Ax is an open-source, modular, and accessible platform for adaptive experimentation developed by Facebook. It is designed to optimize any kind of experiment, from A/B tests to complex, multi-armed bandit problems. Ax integrates with PyTorch and leverages Bayesian optimization to efficiently explore large parameter spaces.

Pros

  • Flexible and modular architecture, allowing for easy customization and extension
  • Powerful Bayesian optimization capabilities for efficient parameter tuning
  • Seamless integration with PyTorch for machine learning experiments
  • Comprehensive documentation and examples for various use cases

Cons

  • Steep learning curve for users unfamiliar with Bayesian optimization
  • Limited support for distributed experiments across multiple machines
  • Requires additional setup and dependencies for certain advanced features
  • May be overkill for simple A/B testing scenarios

Code Examples

  1. Creating an experiment with Ax's Service API:

from ax.service.ax_client import AxClient

ax_client = AxClient()
ax_client.create_experiment(
    name="test_experiment",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "choice", "values": ["a", "b", "c"]},
    ],
    objective_name="metric",
    minimize=False,
)

  2. Running trials and logging data:

def evaluate(parameters):
    # Toy objective: combine the numeric parameter with a bonus per choice of x2.
    bonus = {"a": 0.0, "b": 0.1, "c": 0.2}[parameters["x2"]]
    return {"metric": (parameters["x1"] + bonus, 0.0)}

for _ in range(5):
    parameters, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))

  3. Retrieving the best parameters:

best_parameters, values = ax_client.get_best_parameters()
print(f"Best parameters: {best_parameters}")
print(f"Best objective value: {values[0]['metric']}")

Getting Started

To get started with Ax, follow these steps:

  1. Install Ax using pip:

    pip install ax-platform
    
  2. Import the necessary modules:

    import numpy as np
    from ax.service.ax_client import AxClient
    from ax.utils.measurement.synthetic_functions import hartmann6
    
  3. Create an experiment, define parameters, and run trials:

    ax_client = AxClient()
    ax_client.create_experiment(
        name="my_experiment",
        parameters=[
            {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
            {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
        ],
        objective_name="hartmann6",
        minimize=True,
    )
    
    def evaluate(parameters):
        # Hartmann6 takes six inputs; dimensions not tuned here are fixed at 0.0.
        x = np.array([parameters.get(f"x{i + 1}", 0.0) for i in range(6)])
        return {"hartmann6": (hartmann6(x), 0.0)}
    
    for _ in range(10):
        parameters, trial_index = ax_client.get_next_trial()
        ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameters))
    
    best_parameters, values = ax_client.get_best_parameters()
    print(f"Best parameters: {best_parameters}")
    print(f"Best objective value: {values[0]['hartmann6']}")
    

Competitor Comparisons

Optuna

A hyperparameter optimization framework

Pros of Optuna

  • More lightweight and easier to integrate into existing projects
  • Supports a wider range of optimization algorithms, including pruning methods (see the short sketch after this list)
  • Offers better visualization tools for hyperparameter importance and optimization history
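
As a concrete illustration of the pruning point above, here is a minimal, hedged sketch using Optuna's MedianPruner; the per-step intermediate values are invented purely for illustration:

import optuna

def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    value = (x - 2) ** 2
    for step in range(10):
        # Report a made-up "training curve" so the pruner has intermediate values.
        trial.report(value + 1.0 / (step + 1), step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return value

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=50)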

Cons of Optuna

  • Less suitable for multi-objective optimization compared to Ax
  • Lacks built-in support for Bayesian optimization with contextual information

Code Comparison

Optuna:

import optuna

def objective(trial):
    x = trial.suggest_float('x', -10, 10)
    return (x - 2) ** 2

study = optuna.create_study()
study.optimize(objective, n_trials=100)

Ax:

from ax import optimize

def evaluation_function(parameters):
    x = parameters.get("x")
    return (x - 2) ** 2

best_parameters, values, experiment, model = optimize(
    parameters=[{"name": "x", "type": "range", "bounds": [-10.0, 10.0]}],
    evaluation_function=evaluation_function,
    minimize=True,
)

Both Optuna and Ax are powerful hyperparameter optimization frameworks, but they cater to different use cases. Optuna is more flexible and easier to use for simple optimization tasks, while Ax provides more advanced features for complex experimental designs and multi-objective optimization.
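
To make the multi-objective point concrete, below is a hedged sketch of multi-objective optimization through Ax's Service API; the metric names ("quality", "cost") and the toy evaluation are assumptions made for this example, not taken from the Ax documentation:

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="multi_objective_example",
    parameters=[{"name": "x", "type": "range", "bounds": [-10.0, 10.0]}],
    objectives={
        "quality": ObjectiveProperties(minimize=False),
        "cost": ObjectiveProperties(minimize=True),
    },
)

for _ in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    x = parameters["x"]
    # Toy metrics: "quality" peaks at x = 2, "cost" grows with |x|.
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={"quality": -(x - 2) ** 2, "cost": abs(x)},
    )

pareto_parameters = ax_client.get_pareto_optimal_parameters()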

BoTorch

Bayesian optimization in PyTorch

Pros of BoTorch

  • More flexible and customizable for advanced users
  • Deeper integration with PyTorch for neural network-based models
  • Better suited for research and development of new Bayesian optimization algorithms

Cons of BoTorch

  • Steeper learning curve for beginners
  • Requires more manual setup and configuration
  • Less out-of-the-box functionality for common use cases

Code Comparison

Ax example:

from ax import optimize

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [-10, 10]},
        {"name": "x2", "type": "range", "bounds": [-10, 10]},
    ],
    evaluation_function=lambda p: (p["x1"] - 1)**2 + (p["x2"] - 2)**2,
    minimize=True,
)

BoTorch example:

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Fit a GP to random initial data (BoTorch expects outcomes with shape n x 1).
train_X = torch.rand(10, 2)
train_Y = ((train_X[:, 0] - 1)**2 + (train_X[:, 1] - 2)**2).unsqueeze(-1)
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll)

# Optimize an Expected Improvement acquisition function to propose the next point.
acqf = ExpectedImprovement(model, best_f=train_Y.min(), maximize=False)
bounds = torch.stack([torch.zeros(2), torch.ones(2)])
candidate, acq_value = optimize_acqf(acqf, bounds=bounds, q=1, num_restarts=5, raw_samples=20)

BayesianOptimization

A Python implementation of global optimization with gaussian processes.

Pros of BayesianOptimization

  • Lightweight and easy to use, with a simple API
  • Focuses specifically on Bayesian optimization, making it more straightforward for this particular task
  • Well-documented with clear examples and tutorials

Cons of BayesianOptimization

  • Less feature-rich compared to Ax, with fewer advanced optimization techniques
  • Limited support for complex experiment designs and multi-objective optimization
  • Smaller community and less frequent updates

Code Comparison

BayesianOptimization:

from bayes_opt import BayesianOptimization

def black_box_function(x, y):
    return -x**2 - (y-1)**2 + 1

optimizer = BayesianOptimization(
    f=black_box_function,
    pbounds={'x': (-2, 2), 'y': (-3, 3)},
    random_state=1,
)

optimizer.maximize(init_points=2, n_iter=3)

Ax:

from ax import optimize

def evaluation_function(parameters):
    x, y = parameters["x"], parameters["y"]
    return -x**2 - (y-1)**2 + 1

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "x", "type": "range", "bounds": [-2.0, 2.0]},
        {"name": "y", "type": "range", "bounds": [-3.0, 3.0]},
    ],
    evaluation_function=evaluation_function,
    objective_name="score",
    total_trials=5,
)

Hyperopt

Distributed Asynchronous Hyperparameter Optimization in Python

Pros of Hyperopt

  • More mature and established project with a larger user base
  • Supports a wider range of optimization algorithms, including Tree of Parzen Estimators (TPE)
  • Easier to integrate with existing Python projects due to its simplicity

Cons of Hyperopt

  • Less actively maintained compared to Ax
  • Lacks some advanced features like multi-objective optimization and bandit optimization
  • Limited built-in visualization tools

Code Comparison

Hyperopt:

from hyperopt import fmin, tpe, hp

def objective(x):
    return x**2

best = fmin(fn=objective, space=hp.uniform('x', -10, 10), algo=tpe.suggest, max_evals=100)

Ax:

from ax import optimize

def objective(parameters):
    return parameters["x"] ** 2

best_parameters, values, experiment, model = optimize(
    parameters=[{"name": "x", "type": "range", "bounds": [-10.0, 10.0]}],
    evaluation_function=objective,
    minimize=True,
)

Both libraries offer concise ways to define and optimize objective functions, but Ax provides a more structured approach with its parameter definitions and built-in optimization methods.
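
For reference, Hyperopt can also express richer search spaces and keep per-trial records via its Trials object; the sketch below uses a hypothetical two-parameter space purely for illustration:

from hyperopt import fmin, tpe, hp, Trials

# Hypothetical space mixing a continuous parameter and a categorical choice.
space = {
    "x": hp.uniform("x", -10, 10),
    "kind": hp.choice("kind", ["linear", "quadratic"]),
}

def objective(params):
    x = params["x"]
    return x ** 2 if params["kind"] == "quadratic" else abs(x)

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100, trials=trials)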

MLflow

Open source platform for the machine learning lifecycle

Pros of MLflow

  • More comprehensive end-to-end ML lifecycle management
  • Broader language support (Python, R, Java, and more)
  • Larger community and ecosystem of integrations

Cons of MLflow

  • Less specialized for hyperparameter optimization
  • May require more setup for specific use cases
  • Can be overwhelming for simple projects

Code Comparison

MLflow:

import mlflow

mlflow.start_run()
mlflow.log_param("learning_rate", 0.01)
mlflow.log_metric("accuracy", 0.85)
mlflow.end_run()

Ax:

from ax import optimize

def evaluate(params):
    return {"accuracy": (params["learning_rate"] * 10, 0.0)}

best_parameters, values, experiment, model = optimize(
    parameters=[{"name": "learning_rate", "type": "range", "bounds": [0.001, 0.1]}],
    evaluation_function=evaluate,
    objective_name="accuracy",
)

Summary

MLflow offers a more comprehensive solution for ML lifecycle management, supporting various languages and integrations. It's suitable for a wide range of projects but may be overkill for simpler tasks. Ax, on the other hand, specializes in hyperparameter optimization and experimentation, making it more focused but potentially less versatile for full ML pipeline management.
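
The two tools can also be combined: Ax proposes configurations while MLflow records each trial. The sketch below is a hedged illustration of that pattern; the toy "accuracy" metric is invented for the example:

import mlflow
from ax import optimize

def evaluate(parameters):
    # Toy "accuracy" as a stand-in for a real training run.
    accuracy = 1.0 - (parameters["learning_rate"] - 0.05) ** 2
    with mlflow.start_run():
        mlflow.log_params(parameters)
        mlflow.log_metric("accuracy", accuracy)
    return {"accuracy": (accuracy, 0.0)}

best_parameters, values, experiment, model = optimize(
    parameters=[{"name": "learning_rate", "type": "range", "bounds": [0.001, 0.1]}],
    evaluation_function=evaluate,
    objective_name="accuracy",
    total_trials=10,
)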


README



Ax is an accessible, general-purpose platform for understanding, managing, deploying, and automating adaptive experiments.

Adaptive experimentation is the machine-learning guided process of iteratively exploring a (possibly infinite) parameter space in order to identify optimal configurations in a resource-efficient manner. Ax currently supports Bayesian optimization and bandit optimization as exploration strategies. Bayesian optimization in Ax is powered by BoTorch, a modern library for Bayesian optimization research built on PyTorch.

For full documentation and tutorials, see the Ax website.

Why Ax?

  • Versatility: Ax supports different kinds of experiments, from dynamic ML-assisted A/B testing, to hyperparameter optimization in machine learning.
  • Customization: Ax makes it easy to add new modeling and decision algorithms, enabling research and development with minimal overhead.
  • Production-completeness: Ax comes with storage integration and ability to fully save and reload experiments.
  • Support for multi-modal and constrained experimentation: Ax allows for running and combining multiple experiments (e.g. simulation with a real-world "online" A/B test) and for constrained optimization (e.g. improving classification accuracy without significant increase in resource-utilization).
  • Efficiency in high-noise setting: Ax offers state-of-the-art algorithms specifically geared to noisy experiments, such as simulations with reinforcement-learning agents.
  • Ease of use: Ax includes 3 different APIs that strike different balances between lightweight structure and flexibility. The Service API (recommended for the vast majority of use-cases) provides an extensive, robust, and easy-to-use interface to Ax; the Loop API enables particularly concise usage; and the Developer API enables advanced experimental and methodological control. A short sketch of the Service API appears below.
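
As a brief illustration of the Service API and the constrained optimization mentioned above, here is a hedged sketch; the metric names, the constraint threshold, and the toy train_and_measure function are assumptions made for this example:

from ax.service.ax_client import AxClient

def train_and_measure(parameters):
    # Toy stand-in for a real training job; returns (accuracy, resource_utilization).
    accuracy = 1.0 - abs(parameters["lr"] - 0.01) - 0.001 * parameters["batch_size"]
    resource_utilization = parameters["batch_size"] / 100.0
    return accuracy, resource_utilization

ax_client = AxClient()
ax_client.create_experiment(
    name="tuning_experiment",
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-4, 0.1], "log_scale": True},
        {"name": "batch_size", "type": "choice", "values": [16, 32, 64]},
    ],
    objective_name="accuracy",
    minimize=False,
    outcome_constraints=["resource_utilization <= 0.8"],
)

for _ in range(20):
    parameters, trial_index = ax_client.get_next_trial()
    accuracy, resource_utilization = train_and_measure(parameters)
    ax_client.complete_trial(
        trial_index=trial_index,
        raw_data={"accuracy": accuracy, "resource_utilization": resource_utilization},
    )

best_parameters, values = ax_client.get_best_parameters()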

Getting Started

To run a simple optimization loop in Ax (using the Booth response surface as the artificial evaluation function):

>>> from ax import optimize
>>> best_parameters, best_values, experiment, model = optimize(
        parameters=[
          {
            "name": "x1",
            "type": "range",
            "bounds": [-10.0, 10.0],
          },
          {
            "name": "x2",
            "type": "range",
            "bounds": [-10.0, 10.0],
          },
        ],
        # Booth function
        evaluation_function=lambda p: (p["x1"] + 2*p["x2"] - 7)**2 + (2*p["x1"] + p["x2"] - 5)**2,
        minimize=True,
    )

# best_parameters contains {'x1': 1.02, 'x2': 2.97}; the global min is (1, 3)

Installation

Requirements

You need Python 3.10 or later to run Ax.

The required Python dependencies are:

  • botorch
  • jinja2
  • pandas
  • scipy
  • scikit-learn
  • plotly >=2.2.1

Stable Version

Installing via pip

We recommend installing Ax via pip (even if using Conda environment):

conda install pytorch torchvision -c pytorch  # OSX only (details below)
pip install ax-platform

Installation will use Python wheels from PyPI, available for OSX, Linux, and Windows.

Note: Make sure the pip being used to install ax-platform is actually the one from the newly created Conda environment. If you're using a Unix-based OS, you can use which pip to check.

Recommendation for MacOS users: PyTorch is a required dependency of BoTorch, and can be automatically installed via pip. However, we recommend you install PyTorch manually before installing Ax, using the Anaconda package manager. Installing from Anaconda will link against MKL (a library that optimizes mathematical computation for Intel processors). This will result in up to an order-of-magnitude speed-up for Bayesian optimization, as at the moment, installing PyTorch from pip does not link against MKL.

If you need CUDA on MacOS, you will need to build PyTorch from source. Please consult the PyTorch installation instructions above.

Optional Dependencies

To use Ax with a notebook environment, you will need Jupyter. Install it first:

pip install jupyter

If you want to store the experiments in MySQL, you will need SQLAlchemy:

pip install SQLAlchemy

Latest Version

Installing from Git

You can install the latest (bleeding edge) version from Git.

First, see recommendation for installing PyTorch for MacOS users above.

At times, the bleeding edge for Ax can depend on bleeding edge versions of BoTorch (or GPyTorch). We therefore recommend installing those from Git as well:

pip install git+https://github.com/cornellius-gp/linear_operator.git
pip install git+https://github.com/cornellius-gp/gpytorch.git
export ALLOW_LATEST_GPYTORCH_LINOP=true
pip install git+https://github.com/pytorch/botorch.git
export ALLOW_BOTORCH_LATEST=true
pip install git+https://github.com/facebook/Ax.git#egg=ax-platform

Optional Dependencies

If using Ax in Jupyter notebooks:

pip install git+https://github.com/facebook/Ax.git#egg=ax-platform[notebook]

To support plotly-based plotting in newer Jupyter notebook versions:

pip install "notebook>=5.3" "ipywidgets==7.5"

See Plotly repo's README for details and JupyterLab instructions.

If storing Ax experiments via SQLAlchemy in MySQL or SQLite:

pip install git+https://github.com/facebook/Ax.git#egg=ax-platform[mysql]

Join the Ax Community

Getting help

Please open an issue on our issues page with any questions, feature requests or bug reports! If posting a bug report, please include a minimal reproducible example (as a code snippet) that we can use to reproduce and debug the problem you encountered.

Contributing

See the CONTRIBUTING file for how to help out.

When contributing to Ax, we recommend cloning the repository and installing all optional dependencies:

pip install git+https://github.com/cornellius-gp/linear_operator.git
pip install git+https://github.com/cornellius-gp/gpytorch.git
export ALLOW_LATEST_GPYTORCH_LINOP=true
pip install git+https://github.com/pytorch/botorch.git
export ALLOW_BOTORCH_LATEST=true
git clone https://github.com/facebook/ax.git --depth 1
cd ax
pip install -e .[tutorial]

See recommendation for installing PyTorch for MacOS users above.

The above example limits the cloned directory size via the --depth argument to git clone. If you require the entire commit history you may remove this argument.

License

Ax is licensed under the MIT license.