Top Related Projects
- interpret: Fit interpretable models. Explain blackbox machine learning.
- shap: A game theoretic approach to explain the output of any machine learning model.
- lime: Explaining the predictions of any machine learning classifier.
- AIX360: Interpretability and explainability of data and machine learning models.
Quick Overview
Alibi is an open-source Python library focused on machine learning model inspection and interpretation. It provides a collection of algorithms for explaining predictions of black-box machine learning models and analyzing model behavior. Alibi supports various explanation methods for different types of models and data.
Pros
- Comprehensive set of explanation algorithms for different model types and data formats
- Supports both model-agnostic and model-specific explanation methods
- Well-documented with clear examples and tutorials
- Integrates well with popular machine learning frameworks like TensorFlow and PyTorch
Cons
- Learning curve can be steep for users new to model interpretation techniques
- Some advanced features may require additional dependencies
- Performance can be slow for large datasets or complex models
- Limited support for certain specialized model architectures
Code Examples
- Generating an Anchor explanation for a tabular classifier (a simple model is trained on the adult dataset first, since the dataset only ships raw data and labels):
from alibi.explainers import AnchorTabular
from alibi.datasets import fetch_adult
from sklearn.ensemble import RandomForestClassifier
# fetch_adult returns a Bunch with data, target, feature_names and category_map
adult = fetch_adult()
clf = RandomForestClassifier(n_estimators=50).fit(adult.data, adult.target)
explainer = AnchorTabular(clf.predict, adult.feature_names, categorical_names=adult.category_map)
explainer.fit(adult.data)
explanation = explainer.explain(adult.data[0])
print(explanation.anchor)
- Creating a Counterfactual explanation for an image classifier (assumes a saved MNIST-style Keras model and a preloaded X_test array):
import tensorflow as tf
from alibi.explainers import Counterfactual
tf.compat.v1.disable_eager_execution()  # the Counterfactual explainer runs in TF1 graph mode
model = tf.keras.models.load_model('path/to/model')
# shape is the shape of a single instance, including the batch dimension
explainer = Counterfactual(model, shape=(1, 28, 28, 1), target_proba=0.99)
explanation = explainer.explain(X_test[0:1])  # X_test assumed to hold (28, 28, 1) images
print(explanation.cf)
- Generating a SHAP explanation for a text classifier (illustrative sketch: KernelShap works on numeric features, so the reviews are vectorised and a simple classifier is trained first):
from alibi.explainers import KernelShap
from alibi.datasets import fetch_movie_sentiment
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
movie = fetch_movie_sentiment()
texts, labels = movie.data[:1000], movie.target[:1000]  # small subset keeps the demo light
X = CountVectorizer(min_df=10).fit_transform(texts).toarray()
clf = LogisticRegression(max_iter=1000).fit(X, labels)
explainer = KernelShap(clf.predict_proba)
explainer.fit(X[:100])  # background data for the SHAP kernel approximation
explanation = explainer.explain(X[:1])
print(explanation.shap_values)
Getting Started
To get started with Alibi, install it using pip:
pip install alibi
Then, import the desired explainer, fit it on your training data, and use it on your model:
from alibi.explainers import AnchorTabular
from sklearn.ensemble import RandomForestClassifier
# Train your model (X_train, y_train, X_test and feature_names are assumed to be defined)
model = RandomForestClassifier()
model.fit(X_train, y_train)
# Create an explainer and fit it on the training data
explainer = AnchorTabular(model.predict, feature_names=feature_names)
explainer.fit(X_train)
# Generate an explanation
explanation = explainer.explain(X_test[0])
print(explanation.anchor)
Competitor Comparisons
interpret: Fit interpretable models. Explain blackbox machine learning.
Pros of interpret
- Broader range of interpretability techniques, including global explanations
- More extensive documentation and tutorials
- Stronger focus on model-agnostic interpretability methods
Cons of interpret
- Less specialized for specific ML frameworks like TensorFlow or PyTorch
- May have a steeper learning curve for beginners due to its comprehensive nature
Code Comparison
interpret:
from interpret import set_visualize_provider
from interpret.provider import InlineProvider
set_visualize_provider(InlineProvider())
from interpret.glassbox import ExplainableBoostingClassifier
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
ebm_global = ebm.explain_global()
ebm_global.visualize()
alibi:
from alibi.explainers import AnchorTabular
explainer = AnchorTabular(predict_fn, feature_names)
explainer.fit(X_train)  # fit on training data before explaining
explanation = explainer.explain(X_test[0])
print(explanation.anchor)
Both libraries offer powerful interpretability tools, but interpret provides a more comprehensive suite of techniques and visualizations, while alibi focuses more on specific explainers like Anchors and Counterfactuals. interpret may be better suited for projects requiring a wide range of interpretability methods, while alibi might be preferable for those working with specific ML frameworks or seeking targeted explanations.
shap: A game theoretic approach to explain the output of any machine learning model.
Pros of shap
- More widely adopted and mature project with a larger community
- Extensive documentation and tutorials available
- Supports a broader range of model types and use cases
Cons of shap
- Can be computationally expensive for large datasets or complex models
- May require more setup and configuration for certain use cases
- Limited built-in visualization options compared to Alibi
Code Comparison
shap example:
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
Alibi example:
from alibi.explainers import AnchorTabular
explainer = AnchorTabular(predict_fn, feature_names)
explainer.fit(X_train)  # fit on training data before explaining
explanation = explainer.explain(X_instance)
print(explanation.anchor)
Both libraries offer powerful explainability tools, but shap focuses more on SHAP (SHapley Additive exPlanations) values, while Alibi provides a wider range of explanation methods including Anchors, Counterfactuals, and more. shap is generally more popular and well-established, but Alibi offers a broader toolkit for different explainability needs.
Lime: Explaining the predictions of any machine learning classifier
Pros of LIME
- Simpler and more lightweight implementation
- Widely adopted and well-established in the ML community
- Supports a broader range of model types out-of-the-box
Cons of LIME
- Limited to local explanations only
- Less comprehensive set of explanation methods
- Fewer built-in visualization options
Code Comparison
LIME example:
from lime import lime_tabular
explainer = lime_tabular.LimeTabularExplainer(X_train)
exp = explainer.explain_instance(X_test[0], clf.predict_proba)
Alibi example:
from alibi.explainers import AnchorTabular
explainer = AnchorTabular(predict_fn, feature_names)
explainer.fit(X_train)  # fit on training data before explaining
explanation = explainer.explain(X_test[0])
Both libraries offer straightforward APIs for generating explanations, but Alibi provides a wider range of explainers and more advanced features. LIME focuses primarily on local interpretable model-agnostic explanations, while Alibi includes methods like Anchors, Counterfactuals, and Integrated Gradients.
Alibi offers more comprehensive documentation and examples, making it easier for users to understand and implement various explanation techniques. However, LIME's simplicity and widespread adoption make it a popular choice for quick and easy model interpretability.
AIX360: Interpretability and explainability of data and machine learning models.
Pros of AIX360
- Broader scope of explainability techniques, including prototypes and contrastive explanations
- Stronger focus on fairness metrics and bias mitigation
- More comprehensive documentation and tutorials
Cons of AIX360
- Less frequent updates and maintenance
- Heavier dependency on IBM-specific libraries
- More complex setup and integration process
Code Comparison
AIX360:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
(prototype_idx, prototype_weights) = explainer.explain(X, threshold=0.5)
Alibi:
from alibi.explainers import AnchorTabular
explainer = AnchorTabular(predict_fn, feature_names)
explainer.fit(X_train)  # fit on training data before explaining
explanation = explainer.explain(X_test[0])
Summary
AIX360 offers a wider range of explainability techniques and a stronger focus on fairness, but comes with a more complex setup and IBM-specific dependencies. Alibi, on the other hand, provides a more streamlined experience with frequent updates, but has a narrower scope of explainability methods. The choice between the two depends on the specific requirements of the project and the desired balance between comprehensiveness and ease of use.
README
Alibi is a Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.
Example visualisations from the documentation: Anchor explanations for images, Integrated Gradients for text, Counterfactual examples, Accumulated Local Effects.
Installation and Usage
Alibi can be installed from:
- PyPI or GitHub source (with pip)
- Anaconda (with conda/mamba)
With pip
- Alibi can be installed from PyPI:
  pip install alibi
- Alternatively, the development version can be installed:
  pip install git+https://github.com/SeldonIO/alibi.git
- To take advantage of distributed computation of explanations, install alibi with ray:
  pip install alibi[ray]
- For SHAP support, install alibi as follows:
  pip install alibi[shap]
With conda
To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:
conda install mamba -n base -c conda-forge
- For the standard Alibi install:
  mamba install -c conda-forge alibi
- For distributed computing support:
  mamba install -c conda-forge alibi ray
- For SHAP support:
  mamba install -c conda-forge alibi shap
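A quick way to sanity-check the installation is to print the package version from Python:
import alibi
print(alibi.__version__)  # should print the installed alibi version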
Usage
The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:
from alibi.explainers import AnchorTabular
# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)
# explain an instance
explanation = explainer.explain(x)
The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary containing the explainer metadata and any hyperparameters, and data is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via explanation.data['anchor'] (or equivalently explanation.anchor). The exact details of the available fields vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
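Continuing the AnchorTabular example, the returned object can be inspected as follows (a minimal sketch; precision and coverage are fields produced by the Anchor method):
# metadata about the explainer
print(explanation.meta['name'])
# the computed explanation, via the data dictionary or as attributes
print(explanation.data['anchor'])
print(explanation.anchor, explanation.precision, explanation.coverage)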
Supported Methods
The following tables summarize the possible use cases for each method.
Model Explanations
Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
---|---|---|---|---|---|---|---|---|---|---|
ALE | BB | global | ✔ | ✔ | ✔ | | | | | |
Partial Dependence | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
PD Variance | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
Permutation Importance | BB | global | ✔ | ✔ | ✔ | | | ✔ | | |
Anchors | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔ | For Tabular | |
CEM | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | Optional | |
Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | No | |
Prototype Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔ | Optional | |
Counterfactuals with RL | BB | local | ✔ | | ✔ | | ✔ | ✔ | ✔ | |
Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional | |
Kernel SHAP | BB | local global | ✔ | ✔ | ✔ | | | ✔ | ✔ | ✔ |
Tree SHAP | WB | local global | ✔ | ✔ | ✔ | | | ✔ | Optional | |
Similarity explanations | WB | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
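As an illustration of one of the global, black-box methods above, here is a minimal ALE sketch (clf, X_train and feature_names are placeholders for a fitted classifier and its training data):
from alibi.explainers import ALE, plot_ale
# ALE only needs a prediction function; here the classifier's probability output is used
ale = ALE(clf.predict_proba, feature_names=feature_names)
exp = ale.explain(X_train)
plot_ale(exp)  # plot the accumulated local effects for each feature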
Model Confidence
These algorithms provide instance-specific scores measuring the model confidence for making a particular prediction.
Method | Models | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set required |
---|---|---|---|---|---|---|---|---|
Trust Scores | BB | ✔ | | ✔ | ✔(1) | ✔(2) | | Yes |
Linearity Measure | BB | ✔ | ✔ | ✔ | | ✔ | | Optional |
Key:
- BB - black-box (only require a prediction function)
- BB* - black-box but assume model is differentiable
- WB - requires white-box model access. There may be limitations on models supported
- TF/Keras - TensorFlow models via the Keras API
- Local - instance specific explanation, why was this prediction made?
- Global - explains the model with respect to a set of instances
- (1) - depending on model
- (2) - may require dimensionality reduction
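For example, a minimal Trust Scores sketch for low-dimensional tabular data (clf, X_train, y_train and X_test are placeholders for a fitted classifier and a three-class dataset):
from alibi.confidence import TrustScore
ts = TrustScore()
ts.fit(X_train, y_train, classes=3)  # classes: number of prediction classes
y_pred = clf.predict(X_test)
# higher scores indicate more trustworthy predictions; closest_class is the nearest other class
score, closest_class = ts.score(X_test, y_pred, k=2)
print(score[:5])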
Prototypes
These algorithms provide a distilled view of the dataset and help construct a 1-KNN interpretable classifier.
Method | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set labels |
---|---|---|---|---|---|---|---|
ProtoSelect | ✔ | | ✔ | ✔ | ✔ | ✔ | Optional |
References and Examples
- Accumulated Local Effects (ALE, Apley and Zhu, 2016)
  - Documentation
  - Examples: California housing dataset, Iris dataset
- Partial Dependence (J.H. Friedman, 2001)
  - Documentation
  - Examples: Bike rental
- Partial Dependence Variance (Greenwell et al., 2018)
  - Documentation
  - Examples: Friedman's regression problem
- Permutation Importance (Breiman, 2001; Fisher et al., 2018)
  - Documentation
  - Examples: Who's Going to Leave Next?
- Anchor explanations (Ribeiro et al., 2018)
- Contrastive Explanation Method (CEM, Dhurandhar et al., 2018)
  - Documentation
  - Examples: MNIST, Iris dataset
- Counterfactual Explanations (extension of Wachter et al., 2017)
  - Documentation
  - Examples: MNIST
- Counterfactual Explanations Guided by Prototypes (Van Looveren and Klaise, 2019)
- Model-agnostic Counterfactual Explanations via RL (Samoilescu et al., 2021)
  - Documentation
  - Examples: MNIST, Adult income
- Integrated Gradients (Sundararajan et al., 2017)
  - Documentation
  - Examples: MNIST example, Imagenet example, IMDB example
- Kernel Shapley Additive Explanations (Lundberg et al., 2017)
- Tree Shapley Additive Explanations (Lundberg et al., 2020)
- Trust Scores (Jiang et al., 2018)
  - Documentation
  - Examples: MNIST, Iris dataset
- Linearity Measure
  - Documentation
  - Examples: Iris dataset, fashion MNIST
- ProtoSelect
  - Documentation
  - Examples: Adult Census & CIFAR10
- Similarity explanations
Citations
If you use alibi in your research, please consider citing it.
BibTeX entry:
@article{JMLR:v22:21-0017,
author = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
title = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {181},
pages = {1-7},
url = {http://jmlr.org/papers/v22/21-0017.html}
}