Top Related Projects
- AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
- interpret: Fit interpretable models. Explain blackbox machine learning.
- shap: A game theoretic approach to explain the output of any machine learning model.
- lime: Explaining the predictions of any machine learning classifier.
- xai: An eXplainability toolbox for machine learning.
Quick Overview
AIX360 is an open-source library developed by IBM Research to help data scientists and developers explain and interpret machine learning models. It provides a comprehensive set of algorithms for interpretability and explainability, supporting various types of explanations for different model types and use cases.
Pros
- Offers a wide range of explainability algorithms for different model types (e.g., neural networks, tree-based models)
- Supports both global and local explanations
- Includes interactive visualization tools for better understanding of model behavior
- Well-documented with extensive tutorials and examples
Cons
- May have a steeper learning curve for beginners due to the variety of algorithms
- Some algorithms may be computationally expensive for large datasets
- Limited support for certain specialized model types
- Requires additional dependencies for some features
Code Examples
- Creating a ProtoDash explainer:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
# explain(X, Y, m) selects m prototypes from Y that best summarize X
(W, S, _) = explainer.explain(X_test, X_train, m=5)  # W: prototype weights, S: row indices into X_train
- Generating a LIME explanation (a visualization follow-up appears after these examples):
from aix360.algorithms.lime import LimeImageExplainer
explainer = LimeImageExplainer()
# classifier_fn maps a batch of images to prediction probabilities (e.g., model.predict)
explanation = explainer.explain_instance(image, classifier_fn, top_labels=5, hide_color=0, num_samples=1000)
- Using CEM for contrastive explanations:
from aix360.algorithms.contrastive import CEMExplainer
explainer = CEMExplainer(model)  # model: a wrapped Keras/TensorFlow classifier
# Produce a pertinent-negative ("PN") or pertinent-positive ("PP") explanation; the argument names
# below follow the CEM MNIST tutorial and may vary across AIX360 versions; "autoencoder" is a placeholder for an auxiliary autoencoder model
(adv, delta, info) = explainer.explain_instance(X_instance, arg_mode="PN", AE_model=autoencoder, arg_kappa=10, arg_b=9, arg_max_iter=1000, arg_init_const=10.0, arg_beta=0.1, arg_gamma=100)
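As a follow-up to the LIME image example above, the returned explanation object (which comes from the underlying lime package rather than an AIX360-specific class) can be visualized. The snippet below is a sketch that assumes an RGB image with 0-255 pixel values:
import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries
# Highlight the superpixels supporting the top predicted label
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
plt.imshow(mark_boundaries(temp / 255.0, mask))  # assumes pixel values in 0-255
plt.show()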
Getting Started
To get started with AIX360, follow these steps:
- Install the library:
pip install aix360
- Import the necessary modules:
from aix360.algorithms.protodash import ProtodashExplainer
from aix360.datasets import HELOCDataset
- Load a dataset and create an explainer:
heloc = HELOCDataset()  # requires the HELOC CSV to be downloaded first; see aix360/data/README.md
# Build numpy feature matrices X_train and X_test from the loaded data (see the HELOC tutorial for details)
explainer = ProtodashExplainer()
- Generate explanations:
(W, S, _) = explainer.explain(X_test, X_train, m=5)  # W: prototype weights, S: row indices into X_train
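As an optional follow-up, the returned arrays can be ranked to inspect the most influential prototypes (a small sketch assuming the W and S variables from the step above):
import numpy as np
order = np.argsort(-W)                      # strongest prototypes first
for idx, weight in zip(S[order], W[order]):
    print(f"prototype: row {idx} of X_train, weight {weight:.3f}")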
For more detailed instructions and examples, refer to the official documentation and tutorials on the AIX360 GitHub repository.
Competitor Comparisons
AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
Pros of AIF360
- Focuses specifically on fairness in machine learning, offering a comprehensive set of bias mitigation algorithms
- Provides a consistent API for fairness metrics and algorithms, making it easier to integrate into existing ML pipelines
- Includes educational resources and tutorials on AI fairness concepts
Cons of AIF360
- More limited in scope compared to AIX360, as it primarily addresses fairness issues
- May require additional tools or libraries for broader explainability tasks beyond fairness
Code Comparison
AIF360:
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
dataset = BinaryLabelDataset(...)
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups, privileged_groups)
AIX360:
from aix360.datasets import MNISTDataset
from aix360.algorithms.protodash import ProtodashExplainer
dataset = MNISTDataset()
explainer = ProtodashExplainer()
explanation = explainer.explain(...)
Both repositories belong to the Trusted-AI family of toolkits, with AIF360 focusing on fairness in AI and AIX360 offering a broader range of explainability tools. AIF360 is more specialized for addressing bias and fairness issues, while AIX360 provides a wider array of explainability techniques for various AI models and use cases.
interpret: Fit interpretable models. Explain blackbox machine learning.
Pros of interpret
- More active development with frequent updates and contributions
- Broader range of interpretability techniques, including global and local explanations
- Easier integration with popular ML frameworks like scikit-learn and TensorFlow
Cons of interpret
- Fewer contrastive and exemplar-based explanation methods compared to AIX360
- May have a steeper learning curve for beginners due to its extensive feature set
- Documentation could be more comprehensive for some advanced features
Code Comparison
interpret:
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
show(ebm.explain_global())
AIX360:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
(W, S, _) = explainer.explain(X_test, X_train, m=5)  # 5 prototypes from X_train that summarize X_test
print(S, W)
Both libraries offer tools for model interpretability, but they differ in their approach and focus areas. interpret provides a wider range of techniques and integrates well with popular ML frameworks, while AIX360 covers a broader taxonomy of explanation types, including data explanations, contrastive explanations, and directly interpretable rule-based models.
shap: A game theoretic approach to explain the output of any machine learning model.
Pros of shap
- More widely adopted and actively maintained
- Supports a broader range of machine learning models
- Provides both global and local explanations
Cons of shap
- Can be computationally expensive for large datasets
- Requires more setup and configuration for complex models
- May produce less intuitive explanations for certain types of models
Code Comparison
shap:
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
AIX360:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
(W, S, _) = explainer.explain(X, X, m=5)  # 5 prototypes drawn from X that summarize X itself
Both libraries aim to provide explainable AI solutions, but shap focuses on SHAP (SHapley Additive exPlanations) values, while AIX360 offers a broader range of explainability algorithms. shap is more popular and versatile, but AIX360 provides a more comprehensive toolkit for various explainability techniques. The code examples demonstrate the simplicity of using shap for tree-based models, while AIX360 requires more specific algorithm selection and setup.
Lime: Explaining the predictions of any machine learning classifier
Pros of LIME
- Simpler and more focused on a single interpretability technique
- Easier to integrate into existing machine learning workflows
- More widely adopted and cited in academic literature
Cons of LIME
- Limited to local interpretability, unlike AIX360's comprehensive toolkit
- No built-in explainability metrics (such as faithfulness or monotonicity), which AIX360 provides
- Fewer built-in visualization tools for explaining model decisions
Code Comparison
LIME example:
from lime import lime_tabular
explainer = lime_tabular.LimeTabularExplainer(X_train)
exp = explainer.explain_instance(X_test[0], clf.predict_proba)
AIX360 example:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
(W, S, _) = explainer.explain(X_test[0:1], X_train, m=5)  # prototypes from X_train explaining one test instance
Both libraries offer ways to explain model predictions, but AIX360 provides a wider range of explainability algorithms and techniques. LIME focuses on local interpretable model-agnostic explanations, while AIX360 offers a more comprehensive suite of tools for various aspects of AI explainability, including data explanations, contrastive explanations, and directly interpretable models.
XAI - An eXplainability toolbox for machine learning
Pros of xai
- Lightweight and easy to integrate into existing ML workflows
- Focuses on practical, industry-oriented explainability techniques
- Provides a comprehensive set of visualization tools for model interpretability
Cons of xai
- Less extensive documentation compared to AIX360
- Smaller community and fewer contributors
- Offers fewer explanation algorithm families than AIX360
Code Comparison
xai:
from xai import XAITabular
xai = XAITabular(model, X_train, feature_names=feature_names)
xai.plot.feature_importance()
AIX360:
from aix360.algorithms.protodash import ProtodashExplainer
explainer = ProtodashExplainer()
(W, S, _) = explainer.explain(X_test, X_train, m=5)  # 5 prototypes from X_train that summarize X_test
Both libraries offer tools for explainable AI, but xai tends to be more focused on practical applications and visualization, while AIX360 provides a broader range of algorithms and proxy metrics for interpretability and explainability. xai is generally easier to integrate into existing workflows, while AIX360 offers more comprehensive documentation and a larger community. The code examples demonstrate the simplicity of xai's API compared to the more detailed approach of AIX360.
AI Explainability 360 (v0.3.0)
The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 toolkit supports tabular, text, images, and time series data.
The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a taxonomy tree that can be consulted.
We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the AI Explainability 360 Community on Slack by requesting an invitation here. Please review the instructions to contribute code and python notebooks here.
Supported explainability algorithms
Data explanations
- ProtoDash (Gurumoorthy et al., 2019)
- Disentangled Inferred Prior VAE (Kumar et al., 2018)
Local post-hoc explanations
- ProtoDash (Gurumoorthy et al., 2019)
- Contrastive Explanations Method (Dhurandhar et al., 2018)
- Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019)
- Exemplar based Contrastive Explanations Method
- Grouped Conditional Expectation (adaptation of Individual Conditional Expectation plots by Goldstein et al. to higher dimensions)
- LIME (Ribeiro et al., 2016, GitHub)
- SHAP (Lundberg et al., 2017, GitHub)
Time-Series local post-hoc explanations
- Time Series Saliency Maps using Integrated Gradients (inspired by Sundararajan et al.)
- Time Series LIME (time-series adaptation of the classic paper by Ribeiro et al., 2016)
- Time Series Individual Conditional Expectation (time-series adaptation of Individual Conditional Expectation plots, Goldstein et al.)
Local direct explanations
- Teaching AI to Explain its Decisions (Hind et al., 2019)
- Order Constraints in Optimal Transport (Lim et al., 2022, GitHub)
Certifying local explanations
- Trust Regions for Explanations via Black-Box Probabilistic Certification (Ecertify) (Dhurandhar et al., 2024)
Global direct explanations
- Interpretable Model Differencing (IMD) (Haldar et al., 2023)
- CoFrNets (Continued Fraction Nets) (Puri et al., 2021)
- Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018); see the usage sketch after this list
- Generalized Linear Rule Models (Wei et al., 2019)
- Fast Effective Rule Induction (RIPPER) (William W. Cohen, 1995)
Global post-hoc explanations
- ProfWeight (Dhurandhar et al., 2018)
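To make the catalog above concrete, here is a minimal sketch of fitting one of the directly interpretable rule models (BRCG) on a toy dataset. The BRCGExplainer, BooleanRuleCG, and FeatureBinarizer names follow the BRCG tutorial notebook; exact constructor parameters may vary across AIX360 versions.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from aix360.algorithms.rbm import BRCGExplainer, BooleanRuleCG, FeatureBinarizer

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, data.target, random_state=0)

# Binarize numeric features into thresholded columns that rules can reference
fb = FeatureBinarizer(negations=True)
X_train_b = fb.fit_transform(X_train)
X_test_b = fb.transform(X_test)

# Learn a compact rule set via column generation
explainer = BRCGExplainer(BooleanRuleCG())
explainer.fit(X_train_b, y_train)
print("test accuracy:", (explainer.predict(X_test_b) == y_test).mean())
print(explainer.explain())  # human-readable rules predicting the positive class
Because the rule set is the model itself, the explain() output is the explanation; no separate post-hoc step is needed.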
Supported explainability metrics
- Faithfulness (Alvarez-Melis and Jaakkola, 2018)
- Monotonicity (Luss et al., 2019)
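As a rough illustration of how these proxy metrics are invoked, here is a minimal sketch assuming a scikit-learn-style classifier, synthetic data, and illustrative attribution scores; check aix360.metrics for the exact signatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from aix360.metrics import faithfulness_metric, monotonicity_metric

# Toy data and model (illustrative only)
rng = np.random.RandomState(0)
X = rng.rand(100, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                        # instance whose explanation is being evaluated
coefs = np.array([0.4, 0.4, 0.1, 0.05, 0.05])   # attribution scores produced by any explainer
base = np.zeros(5)                              # baseline ("uninformative") feature values

# Faithfulness: correlation between attribution scores and the drop in predicted probability
# when each feature is replaced by its baseline value
print(faithfulness_metric(model, x, coefs, base))
# Monotonicity: whether adding features in order of importance never decreases model confidence
print(monotonicity_metric(model, x, coefs, base))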
Setup
Supported Configurations:
Installation keyword | Explainer(s) | OS | Python version |
---|---|---|---|
cofrnet | cofrnet | macOS, Ubuntu, Windows | 3.10 |
contrastive | cem, cem_maf | macOS, Ubuntu, Windows | 3.6 |
dipvae | dipvae | macOS, Ubuntu, Windows | 3.10 |
gce | gce | macOS, Ubuntu, Windows | 3.10 |
ecertify | ecertify | macOS, Ubuntu, Windows | 3.10 |
imd | imd | macOS, Ubuntu | 3.10 |
lime | lime | macOS, Ubuntu, Windows | 3.10 |
matching | matching | macOS, Ubuntu, Windows | 3.10 |
nncontrastive | nncontrastive | macOS, Ubuntu, Windows | 3.10 |
profwt | profwt | macOS, Ubuntu, Windows | 3.6 |
protodash | protodash | macOS, Ubuntu, Windows | 3.10 |
rbm | brcg, glrm | macOS, Ubuntu, Windows | 3.10 |
rule_induction | ripper | macOS, Ubuntu, Windows | 3.10 |
shap | shap | macOS, Ubuntu, Windows | 3.6 |
ted | ted | macOS, Ubuntu, Windows | 3.10 |
tsice | tsice | macOS, Ubuntu, Windows | 3.10 |
tslime | tslime | macOS, Ubuntu, Windows | 3.10 |
tssaliency | tssaliency | macOS, Ubuntu, Windows | 3.10 |
(Optional) Create a virtual environment
AI Explainability 360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended to ensure dependencies may be installed safely. If you have trouble installing the toolkit, try this first.
Conda
Conda is recommended for all configurations though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) and can be installed from here if you do not already have it.
Then, create a new python environment based on the explainability algorithms you wish to use by referring to the table above. For example, for python 3.10, use the following command:
conda create --name aix360 python=3.10
conda activate aix360
The shell prompt should now look like (aix360)$. To deactivate the environment, run:
(aix360)$ conda deactivate
The prompt will return to $ or (base)$.
Note: Older versions of conda may use source activate aix360 and source deactivate (activate aix360 and deactivate on Windows).
Installation
Clone the latest version of this repository:
(aix360)$ git clone https://github.com/Trusted-AI/AIX360
If you'd like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in aix360/data/README.md.
Then, navigate to the root directory of the project (the one containing the setup.py file) and run:
(aix360)$ pip install -e .[<algo1>,<algo2>, ...]
The above command installs the packages required by specific algorithms. Here <algo> refers to the installation keyword in the table above. For instance, to install the packages needed by the BRCG, DIPVAE, and TSICE algorithms, one could use:
(aix360)$ pip install -e .[rbm,dipvae,tsice]
The default command pip install . installs the default dependencies alone.
Note that you may not be able to install two algorithms that require different versions of Python in the same environment (for instance, contrastive along with rbm).
If you face any issues, please try upgrading pip and setuptools and uninstall any previous versions of aix360 before attempting the above step again.
(aix360)$ pip install --upgrade pip setuptools
(aix360)$ pip uninstall aix360
PIP Installation of AI Explainability 360
If you would like to quickly start using the AI explainability 360 toolkit without explicitly cloning this repository, you can use one of these options:
- Install v0.3.0 via repository link
(your environment)$ pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...]
For example, use pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[rbm,dipvae,tsice] to install BRCG, DIPVAE, and TSICE. You may need to install cmake via conda install cmake if it is not already available in your environment.
- Install v0.3.0 (or previous versions) via pypi
(your environment)$ pip install aix360
If you follow either of these two options, you will need to download the notebooks available in the examples folder separately.
Dealing with installation errors
The AI Explainability 360 toolkit is tested on Windows, macOS, and Linux. However, if you still face installation issues due to package dependencies, please try installing the corresponding package via conda (e.g., conda install package-name) and then install the toolkit by following the usual steps. For example, if you face issues related to pygraphviz during installation, use conda install pygraphviz and then install the toolkit.
Please use the right python environment based on the table above.
Running in Docker
- Under the AIX360 directory, build the container image from the Dockerfile using docker build -t aix360_docker .
- Start the container using docker run -it -p 8888:8888 aix360_docker:latest bash, assuming port 8888 is free on your machine.
- Inside the container, start Jupyter Lab using jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser
- Access the sample tutorials on your machine at the URL localhost:8888
Using AI Explainability 360
The examples directory contains a diverse collection of Jupyter notebooks that use AI Explainability 360 in various ways. Both the example and tutorial notebooks illustrate working code using the toolkit. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and examples here.
Citing AI Explainability 360
If you are using AI Explainability 360 for your work, we encourage you to
- Cite the following paper. The bibtex entry is as follows:
@misc{aix360-sept-2019,
title = "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques",
author = {Vijay Arya and Rachel K. E. Bellamy and Pin-Yu Chen and Amit Dhurandhar and Michael Hind
and Samuel C. Hoffman and Stephanie Houde and Q. Vera Liao and Ronny Luss and Aleksandra Mojsilovi\'c
and Sami Mourad and Pablo Pedemonte and Ramya Raghavendra and John Richards and Prasanna Sattigeri
and Karthikeyan Shanmugam and Moninder Singh and Kush R. Varshney and Dennis Wei and Yunfeng Zhang},
month = sep,
year = {2019},
url = {https://arxiv.org/abs/1909.03012}
}
- Put a star on this repository.
- Share your success stories with us and others in the AI Explainability 360 Community.
AIX360 Videos
- Introductory video to AI Explainability 360 by Vijay Arya and Amit Dhurandhar, September 5, 2019 (35 mins)
Acknowledgements
AIX360 is built with the help of several open-source packages. All of these are listed in setup.py, and some of them include:
- TensorFlow https://www.tensorflow.org/about/bib
- PyTorch https://github.com/pytorch/pytorch
- scikit-learn https://scikit-learn.org/stable/about.html
License Information
Please view both the LICENSE file and the supplementary license folder in the root directory for license information.