Top Related Projects
- Responsible AI Toolbox: a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
- AIF360: a comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
- Captum: model interpretability and understanding for PyTorch.
Quick Overview
Alibi Detect is an open-source Python library focused on outlier, adversarial, and drift detection. It provides a set of algorithms and tools to identify anomalies in data, detect adversarial attacks on machine learning models, and monitor for distribution shifts in data over time. The library is designed to work with both TensorFlow and PyTorch backends.
Pros
- Comprehensive suite of detection algorithms for various use cases
- Supports both TensorFlow and PyTorch backends
- Well-documented with extensive examples and tutorials
- Integrates seamlessly with other popular data science and machine learning libraries
Cons
- May have a steeper learning curve for beginners due to its wide range of functionalities
- Some advanced features might require in-depth understanding of underlying concepts
- Performance can be computationally intensive for large datasets or complex models
- Limited support for certain specialized domains or niche detection scenarios
Code Examples
- Outlier detection using Mahalanobis distance:
from alibi_detect.od import Mahalanobis
import numpy as np
# Generate random reference data
X = np.random.randn(1000, 5)
# Initialize the detector and infer a threshold from the reference data
od = Mahalanobis()
od.infer_threshold(X, threshold_perc=95)
# Predict on new data
X_test = np.random.randn(100, 5)
predictions = od.predict(X_test)
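For intuition, the Mahalanobis distance behind this detector is the Euclidean distance after whitening by the inverse covariance. A minimal pure-Python sketch (not the library's implementation, which also updates its statistics online):

```python
import math

def mahalanobis_distance(x, mean, cov_inv):
    # sqrt( (x - mean)^T · cov_inv · (x - mean) )
    d = [xi - mi for xi, mi in zip(x, mean)]
    cd = [sum(cov_inv[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * ci for di, ci in zip(d, cd)))

# With the identity covariance this reduces to the Euclidean distance:
print(mahalanobis_distance([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))  # 5.0
```

Instances whose distance exceeds a threshold (e.g. inferred from a reference percentile) are flagged as outliers.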
- Drift detection using Maximum Mean Discrepancy:
from alibi_detect.cd import MMDDrift
import numpy as np
# Generate reference and test data
X_ref = np.random.randn(1000, 10)
X_test = np.random.randn(100, 10) + 0.5
# Initialize the detector with the reference data (no separate fit step is needed)
cd = MMDDrift(X_ref, p_val=.05)
# Predict drift
predictions = cd.predict(X_test)
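The MMD statistic itself is simple to state: it compares mean kernel similarities within and across the two samples. A minimal pure-Python sketch of the biased estimate with an RBF kernel, for intuition only (the library's detector adds permutation-based p-values and kernel bandwidth selection):

```python
import math

def rbf(x, y, sigma=1.0):
    # Gaussian RBF kernel on two equal-length vectors
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    kxx = sum(rbf(a, b, sigma) for a in X for b in X) / len(X) ** 2
    kyy = sum(rbf(a, b, sigma) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(rbf(a, b, sigma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

X = [[0.0], [0.1], [-0.1]]
Y = [[2.0], [2.1], [1.9]]  # shifted sample -> clearly positive MMD^2
```

A value near zero means the samples look alike under the kernel; drift pushes the estimate up.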
- Adversarial detection using an adversarial auto-encoder:
from alibi_detect.ad import AdversarialAE
import tensorflow as tf
# Load a pre-trained classifier and training data (user-supplied helpers)
model = tf.keras.models.load_model('my_model.h5')
X_train, y_train = load_data()
# Define encoder and decoder networks for the auto-encoder (architectures left to the user)
encoder_net = tf.keras.Sequential([...])
decoder_net = tf.keras.Sequential([...])
# Initialize and fit the detector; the model and networks are keyword arguments
ad = AdversarialAE(
    model=model,
    encoder_net=encoder_net,
    decoder_net=decoder_net,
    threshold=0.1
)
ad.fit(X_train)
# Predict on potentially adversarial examples
X_test = load_test_data()
predictions = ad.predict(X_test)
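Conceptually, the adversarial auto-encoder scores an instance by how much the classifier's output distribution shifts between the instance and its reconstruction. A simplified pure-Python sketch of a KL-divergence-based score (the library's actual scoring is more involved):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete probability distributions
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Benign input: the prediction barely moves after reconstruction -> low score
benign_score = kl_divergence([0.9, 0.1], [0.85, 0.15])
# Adversarial input: the reconstruction partially undoes the attack,
# so the prediction flips -> high score
adversarial_score = kl_divergence([0.9, 0.1], [0.2, 0.8])
```

Instances whose score exceeds the configured threshold are flagged as adversarial.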
Getting Started
To get started with Alibi Detect, follow these steps:
- Install the library:
pip install alibi-detect
- Import the necessary modules:
from alibi_detect.od import IForest
from alibi_detect.cd import KSDrift
from alibi_detect.ad import AdversarialAE
- Choose an appropriate detector for your use case (e.g., outlier, drift, or adversarial) and initialize it with relevant parameters.
- Fit the detector on your reference or training data.
- Use the predict method to detect anomalies, drift, or adversarial examples in new data.
For more detailed examples and usage instructions, refer to the official documentation and tutorials on the Alibi Detect GitHub repository.
Competitor Comparisons
Responsible AI Toolbox
Pros of Responsible AI Toolbox
- Broader scope, covering various aspects of responsible AI beyond just anomaly detection
- Integrates well with Azure Machine Learning and other Microsoft services
- Provides interactive visualizations and dashboards for easier interpretation
Cons of Responsible AI Toolbox
- More complex setup and usage compared to Alibi Detect
- Primarily focused on tabular data, with limited support for other data types
- Less specialized in drift detection and outlier detection compared to Alibi Detect
Code Comparison
Alibi Detect (drift detection):
from alibi_detect.cd import TabularDrift
cd = TabularDrift(X_ref, p_val=.05, categories_per_feature=categories)
preds = cd.predict(X_test)
Responsible AI Toolbox (model interpretability):
from raiwidgets import ExplanationDashboard
ExplanationDashboard(global_explanation, model, dataset, true_y, features)
Both libraries offer powerful tools for responsible AI, but Alibi Detect is more focused on drift and anomaly detection, while Responsible AI Toolbox provides a broader set of features for model interpretability, fairness assessment, and error analysis.
AIF360
Pros of AIF360
- Comprehensive suite of fairness metrics and algorithms
- Extensive documentation and educational resources
- Supports multiple programming languages (Python, R, and JavaScript)
Cons of AIF360
- Steeper learning curve due to its extensive feature set
- Less focus on drift detection and outlier detection
Code Comparison
AIF360:
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
dataset = BinaryLabelDataset(...)
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups, privileged_groups)
alibi-detect:
from alibi_detect.cd import TabularDrift
cd = TabularDrift(X_ref, p_val=.05, categories_per_feature=categories_per_feature)
preds = cd.predict(X)
Summary
AIF360 is a comprehensive toolkit for fairness in machine learning, offering a wide range of metrics and algorithms across multiple programming languages. It provides extensive documentation and educational resources, making it ideal for in-depth fairness analysis and research.
alibi-detect, on the other hand, focuses more on drift detection, outlier detection, and adversarial detection. It offers a more streamlined approach to these specific tasks, potentially making it easier to integrate into existing ML pipelines for monitoring and detecting changes in data distributions.
The choice between the two depends on the specific needs of the project: AIF360 for comprehensive fairness analysis, or alibi-detect for focused drift and outlier detection in production environments.
Captum
Pros of Captum
- Focused on interpretability and explainability for PyTorch models
- Extensive set of attribution algorithms and visualization tools
- Seamless integration with PyTorch ecosystem
Cons of Captum
- Limited to PyTorch models only
- Lacks specific drift detection and outlier detection capabilities
- Primarily focused on post-hoc explanations rather than real-time monitoring
Code Comparison
Captum (attribution example):
from captum.attr import IntegratedGradients
ig = IntegratedGradients(model)
attributions = ig.attribute(input, target=target_class)
Alibi Detect (drift detection example):
from alibi_detect.cd import MMDDrift
drift_detector = MMDDrift(X_ref, p_val=0.05)
prediction = drift_detector.predict(X_test)
Summary
Captum is tailored for PyTorch model interpretability, offering a wide range of attribution methods and visualizations. It excels in explaining model predictions but is limited to the PyTorch ecosystem.
Alibi Detect, on the other hand, focuses on drift detection, outlier detection, and adversarial detection across multiple frameworks. It provides real-time monitoring capabilities but may not offer as extensive interpretability features as Captum.
Choose Captum for in-depth PyTorch model explanations, and Alibi Detect for broader monitoring and detection across different ML frameworks.
README
Alibi Detect is a Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection.
For more background on the importance of monitoring outliers and distributions in a production setting, check out this talk from the Challenges in Deploying and Monitoring Machine Learning Systems ICML 2020 workshop, based on the paper Monitoring and explainability of models in production and referencing Alibi Detect.
For a thorough introduction to drift detection, check out Protecting Your Machine Learning Against Drift: An Introduction. The talk covers what drift is and why it pays to detect it, the different types of drift, how it can be detected in a principled manner and also describes the anatomy of a drift detector.
Installation and Usage
The package, alibi-detect, can be installed from:
- PyPI or GitHub source (with pip)
- Anaconda (with conda/mamba)
With pip
- alibi-detect can be installed from PyPI:
pip install alibi-detect
- Alternatively, the development version can be installed:
pip install git+https://github.com/SeldonIO/alibi-detect.git
- To install with the TensorFlow backend:
pip install alibi-detect[tensorflow]
- To install with the PyTorch backend:
pip install alibi-detect[torch]
- To install with the KeOps backend:
pip install alibi-detect[keops]
- To use the Prophet time series outlier detector:
pip install alibi-detect[prophet]
With conda
To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:
conda install mamba -n base -c conda-forge
To install alibi-detect:
mamba install -c conda-forge alibi-detect
Usage
We will use the VAE outlier detector to illustrate the API.
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector
# initialize and fit detector
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(x_train)
# make predictions
preds = od.predict(x_test)
# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
The predictions are returned in a dictionary with keys meta and data. meta contains the detector's metadata, while data is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are e.g. outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
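As an illustration, the returned dictionary can be consumed like this (field names below are representative of an outlier detector; exact keys differ per algorithm):

```python
# Representative shape of a prediction dictionary (illustrative values)
preds = {
    'meta': {'name': 'OutlierVAE', 'detector_type': 'offline', 'data_type': None},
    'data': {
        'instance_score': [0.02, 0.31, 0.07],
        'is_outlier': [0, 1, 0],  # 1 where the score exceeds the threshold
    },
}

# Indices of instances flagged as outliers
flagged = [i for i, o in enumerate(preds['data']['is_outlier']) if o]
print(flagged)  # [1]
```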
Supported Algorithms
The following tables show the advised use cases for each algorithm. The column Feature Level indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information with links to the documentation and original papers as well as examples for each of the detectors.
Outlier Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Isolation Forest | ✔ | | | | ✔ | | |
Mahalanobis Distance | ✔ | | | | ✔ | ✔ | |
AE | ✔ | ✔ | | | | | ✔ |
VAE | ✔ | ✔ | | | | | ✔ |
AEGMM | ✔ | ✔ | | | | | |
VAEGMM | ✔ | ✔ | | | | | |
Likelihood Ratios | ✔ | ✔ | ✔ | | ✔ | | ✔ |
Prophet | | | ✔ | | | | |
Spectral Residual | | | ✔ | | | ✔ | ✔ |
Seq2Seq | | | ✔ | | | | ✔ |
Adversarial Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Adversarial AE | ✔ | ✔ | | | | | |
Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Drift Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Kolmogorov-Smirnov | ✔ | ✔ | | ✔ | ✔ | | ✔ |
Cramér-von Mises | ✔ | ✔ | | | | ✔ | ✔ |
Fisher's Exact Test | ✔ | | | | ✔ | ✔ | ✔ |
Maximum Mean Discrepancy (MMD) | ✔ | ✔ | | ✔ | ✔ | ✔ | |
Learned Kernel MMD | ✔ | ✔ | | ✔ | ✔ | | |
Context-aware MMD | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Least-Squares Density Difference | ✔ | ✔ | | ✔ | ✔ | ✔ | |
Chi-Squared | ✔ | | | | ✔ | | ✔ |
Mixed-type tabular data | ✔ | | | | ✔ | | ✔ |
Classifier | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Spot-the-diff | ✔ | ✔ | ✔ | ✔ | ✔ | | ✔ |
Classifier Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Regressor Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ | | |
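Several of the univariate detectors above reduce to classical two-sample statistics applied feature-wise. For intuition, the Kolmogorov-Smirnov statistic (the core of the KS detector, minus the multivariate correction and p-value computation) can be sketched in a few lines:

```python
def ks_statistic(a, b):
    # Maximum absolute difference between the empirical CDFs of two 1-D samples
    def ecdf(sample, v):
        return sum(1 for x in sample if x <= v) / len(sample)
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in sorted(set(a) | set(b)))

print(ks_statistic([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0 -> no drift signal
print(ks_statistic([0, 0, 0], [1, 1, 1]))        # 1.0 -> maximal separation
```

The drift detectors turn such statistics into a decision by comparing the resulting p-value against the configured p_val threshold.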
TensorFlow and PyTorch support
The drift detectors support TensorFlow, PyTorch and (where applicable) KeOps backends. However, Alibi Detect does not install these by default. See the installation options for more details.
from alibi_detect.cd import MMDDrift
cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
The same detector in PyTorch:
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
Or in KeOps:
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
Built-in preprocessing steps
Alibi Detect also comes with various preprocessing steps such as randomly initialized encoders, pretrained text embeddings (via the transformers library) to detect drift on, and extraction of hidden layers from machine learning models. This makes it possible to detect different types of drift, such as covariate and predicted distribution shift. The preprocessing steps are again supported in TensorFlow and PyTorch.
from functools import partial
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift
model = ...  # TensorFlow model; tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)
cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
Check the example notebooks (e.g. CIFAR10, movie reviews) for more details.
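Conceptually, a preprocess_fn is just a callable applied to both the reference and test batches before the two-sample test runs. A minimal pure-Python stand-in for a feature extractor like HiddenOutput:

```python
def make_preprocess_fn(feature_extractor):
    # Wrap an instance-level feature extractor into a batch-level callable,
    # as a drift detector would apply it to reference and test batches alike
    def preprocess_fn(batch):
        return [feature_extractor(x) for x in batch]
    return preprocess_fn

# Stand-in "hidden layer": reduce each instance to its mean activation
extract = lambda x: sum(x) / len(x)
preprocess_fn = make_preprocess_fn(extract)
print(preprocess_fn([[1.0, 3.0], [2.0, 4.0]]))  # [2.0, 3.0]
```

Drift is then tested in the extracted feature space rather than on the raw inputs, which is what makes detection tractable for images and text.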
Reference List
Outlier Detection
- Isolation Forest (FT Liu et al., 2008)
  - Example: Network Intrusion
- Mahalanobis Distance (Mahalanobis, 1936)
  - Example: Network Intrusion
- Auto-Encoder (AE)
  - Example: CIFAR10
- Variational Auto-Encoder (VAE) (Kingma et al., 2013)
  - Examples: Network Intrusion, CIFAR10
- Auto-Encoding Gaussian Mixture Model (AEGMM) (Zong et al., 2018)
  - Example: Network Intrusion
- Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)
  - Example: Network Intrusion
- Likelihood Ratios (Ren et al., 2019)
  - Examples: Genome, Fashion-MNIST vs. MNIST
- Prophet Time Series Outlier Detector (Taylor et al., 2018)
  - Example: Weather Forecast
- Spectral Residual Time Series Outlier Detector (Ren et al., 2019)
  - Example: Synthetic Dataset
- Sequence-to-Sequence (Seq2Seq) Outlier Detector (Sutskever et al., 2014; Park et al., 2017)
  - Examples: ECG, Synthetic Dataset
Adversarial Detection
- Adversarial Auto-Encoder (Vacanti and Van Looveren, 2020)
  - Example: CIFAR10
- Model distillation
  - Example: CIFAR10
Drift Detection
- Kolmogorov-Smirnov
  - Examples: CIFAR10, molecular graphs, movie reviews
- Cramér-von Mises
  - Example: Penguins
- Fisher's Exact Test
  - Example: Penguins
- Maximum Mean Discrepancy (Gretton et al., 2012)
  - Examples: CIFAR10, molecular graphs, movie reviews, Amazon reviews
- Learned Kernel MMD (Liu et al., 2020)
  - Example: CIFAR10
- Context-aware MMD (Cobb and Van Looveren, 2022)
  - Examples: ECG, news topics
- Chi-Squared
  - Example: Income Prediction
- Mixed-type tabular data
  - Example: Income Prediction
- Classifier (Lopez-Paz and Oquab, 2017)
  - Examples: CIFAR10, Amazon reviews
- Spot-the-diff (adaptation of Jitkrittum et al., 2016)
  - Example: MNIST and Wine quality
- Classifier and Regressor Uncertainty
  - Examples: CIFAR10 and Wine, molecular graphs
- Online Maximum Mean Discrepancy
  - Examples: Wine Quality, Camelyon medical imaging
- Online Least-Squares Density Difference (Bu et al., 2017)
  - Example: Wine Quality
Datasets
The package also contains functionality in alibi_detect.datasets to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a Bunch object with the data, labels and optional metadata are returned. Example:
from alibi_detect.datasets import fetch_ecg
(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
Sequential Data and Time Series
- Genome Dataset: fetch_genome
  - Bacteria genomics dataset for out-of-distribution detection, released as part of Likelihood Ratios for Out-of-Distribution Detection. From the original TL;DR: The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test. There are respectively 1, 7 and again 7 million sequences in the training, validation and test sets. For detailed info on the dataset check the README.
from alibi_detect.datasets import fetch_genome
(X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
- ECG 5000: fetch_ecg
  - 5000 ECGs, originally obtained from Physionet.
- NAB: fetch_nab
  - Any univariate time series in a DataFrame from the Numenta Anomaly Benchmark. A list with the available time series can be retrieved using alibi_detect.datasets.get_list_nab().
Images
- CIFAR-10-C: fetch_cifar10c
  - CIFAR-10-C (Hendrycks & Dietterich, 2019) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in a classification model's performance trained on CIFAR-10. fetch_cifar10c allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with alibi_detect.datasets.corruption_types_cifar10c(). The dataset can be used in research on robustness and drift. The original data can be found here. Example:
from alibi_detect.datasets import fetch_cifar10c
corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
- Adversarial CIFAR-10: fetch_attack
  - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: Carlini-Wagner ('cw') and SLIDE ('slide'). Example:
from alibi_detect.datasets import fetch_attack
(X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
Tabular
- KDD Cup '99: fetch_kdd
  - Dataset with different types of computer network intrusions. fetch_kdd allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.
Models
Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under alibi_detect.models. Main implementations:
- PixelCNN++: alibi_detect.models.pixelcnn.PixelCNN
- Variational Autoencoder: alibi_detect.models.autoencoder.VAE
- Sequence-to-sequence model: alibi_detect.models.autoencoder.Seq2Seq
- ResNet: alibi_detect.models.resnet
  - Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our Google Cloud Bucket and can be fetched as follows:
from alibi_detect.utils.fetching import fetch_tf_model
model = fetch_tf_model('cifar10', 'resnet32')
Integrations
Alibi Detect is integrated into the machine learning model deployment platform Seldon Core and the model serving framework KFServing.
Citations
If you use alibi-detect in your research, please consider citing it.
BibTeX entry:
@software{alibi-detect,
title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
url = {https://github.com/SeldonIO/alibi-detect},
version = {0.12.1.dev0},
date = {2024-04-17},
year = {2019}
}