Trusted-AI / AIF360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.


Top Related Projects

  • fairlearn: A Python package to assess and improve fairness of machine learning models.
  • responsible-ai-toolbox: A suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems, empowering developers and stakeholders to develop and monitor AI more responsibly and take better data-driven actions.
  • alibi: Algorithms for explaining machine learning models.

Quick Overview

AIF360 (AI Fairness 360) is an open-source toolkit developed by IBM Research to help detect and mitigate bias in machine learning models and datasets. It provides a comprehensive set of metrics for measuring bias and algorithms for mitigating bias, along with educational resources and real-world use cases.

Pros

  • Comprehensive suite of fairness metrics and mitigation algorithms
  • Supports multiple programming languages (Python and R)
  • Includes detailed tutorials and real-world use cases
  • Actively maintained and supported by IBM Research

Cons

  • Steep learning curve for users new to fairness in AI
  • Limited integration with some popular machine learning frameworks
  • Performance can be slow for large datasets
  • Documentation can be overwhelming for beginners

Code Examples

  1. Loading a dataset and computing fairness metrics:
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

dataset = BinaryLabelDataset(df=your_dataframe, label_names=['target'], protected_attribute_names=['race'])
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=[{'race': 0}], privileged_groups=[{'race': 1}])
print(metric.mean_difference())
  2. Applying a bias mitigation algorithm:
from aif360.algorithms.preprocessing import Reweighing

rw = Reweighing(unprivileged_groups=[{'race': 0}], privileged_groups=[{'race': 1}])
dataset_transformed = rw.fit_transform(dataset)
  3. Evaluating a classifier for fairness:
from aif360.metrics import ClassificationMetric
from sklearn.linear_model import LogisticRegression

# Split the AIF360 dataset into train and test portions (70/30)
train, test = dataset.split([0.7], shuffle=True)

clf = LogisticRegression()
clf.fit(train.features, train.labels.ravel())

# ClassificationMetric expects two datasets: the ground truth and a copy holding the predictions
test_pred = test.copy()
test_pred.labels = clf.predict(test.features).reshape(-1, 1)

metric = ClassificationMetric(test, test_pred, unprivileged_groups=[{'race': 0}], privileged_groups=[{'race': 1}])
print(metric.equal_opportunity_difference())

Getting Started

To get started with AIF360, follow these steps:

  1. Install the library:
pip install aif360
  2. Import necessary modules:
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing
  3. Load your dataset and create a BinaryLabelDataset object:
dataset = BinaryLabelDataset(df=your_dataframe, label_names=['target'], protected_attribute_names=['race'])
  4. Compute fairness metrics and apply bias mitigation algorithms as needed, as in the sketch below.
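
For example, a minimal before-and-after check might look like the following sketch (the DataFrame, column names, and group encodings are illustrative placeholders, not part of the AIF360 API):

privileged = [{'race': 1}]
unprivileged = [{'race': 0}]

# Measure bias on the original dataset
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('Mean difference before:', metric.mean_difference())

# Mitigate with Reweighing and re-measure on the transformed dataset
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print('Mean difference after:', metric_transf.mean_difference())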

Competitor Comparisons

fairlearn: A Python package to assess and improve fairness of machine learning models.

Pros of fairlearn

  • More focused on integrating fairness metrics into existing machine learning workflows
  • Provides post-processing techniques for model outputs
  • Offers a wider range of fairness metrics and constraints

Cons of fairlearn

  • Less comprehensive in bias mitigation algorithms compared to AIF360
  • Primarily supports scikit-learn models, limiting its applicability to other frameworks
  • Lacks some of the more advanced preprocessing techniques found in AIF360

Code Comparison

fairlearn example:

from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

constraint = DemographicParity()
mitigator = ExponentiatedGradient(estimator, constraint)
mitigator.fit(X, y, sensitive_features=A)
print(demographic_parity_difference(y, mitigator.predict(X), sensitive_features=A))

AIF360 example:

from aif360.algorithms.preprocessing import Reweighing
from aif360.datasets import BinaryLabelDataset

dataset = BinaryLabelDataset(df=df, label_names=['label'], protected_attribute_names=['race'])
reweighing = Reweighing(unprivileged_groups, privileged_groups)
dataset_transformed = reweighing.fit_transform(dataset)

Both libraries offer tools for fairness in machine learning, but they differ in their approach and focus. fairlearn is more integrated with existing ML workflows, while AIF360 provides a more comprehensive set of bias mitigation techniques.

responsible-ai-toolbox: A suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems, empowering developers and stakeholders to develop and monitor AI more responsibly and take better data-driven actions.

Pros of responsible-ai-toolbox

  • More comprehensive suite of tools for responsible AI, including interpretability, fairness, and error analysis
  • Better integration with popular ML frameworks like scikit-learn, PyTorch, and TensorFlow
  • More active development and frequent updates

Cons of responsible-ai-toolbox

  • Steeper learning curve due to broader scope and more complex API
  • Less focus on specific fairness metrics and algorithms compared to AIF360
  • Requires more setup and configuration for basic use cases

Code Comparison

AIF360:

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

dataset = BinaryLabelDataset(...)
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups, privileged_groups)
print(metric.disparate_impact())

responsible-ai-toolbox:

from raiwidgets import FairnessDashboard
from responsibleai import RAIInsights

rai_insights = RAIInsights(model, train, test, target_column, task_type='classification')
rai_insights.compute()
FairnessDashboard(rai_insights)

Both repositories focus on fairness in AI, but responsible-ai-toolbox offers a broader set of tools and better integration with popular ML frameworks. However, AIF360 provides a more straightforward API for specific fairness metrics and algorithms. The choice between them depends on the project's requirements and the user's familiarity with responsible AI concepts.

alibi: Algorithms for explaining machine learning models.

Pros of Alibi

  • Broader focus on model interpretability and explainability, not just fairness
  • More active development and frequent updates
  • Supports a wider range of machine learning frameworks (TensorFlow, Keras, PyTorch)

Cons of Alibi

  • Less comprehensive fairness metrics and mitigation techniques
  • Steeper learning curve for users new to explainable AI concepts
  • Smaller community and fewer educational resources compared to AIF360

Code Comparison

Alibi (Anchor explanations):

from alibi.explainers import AnchorTabular

explainer = AnchorTabular(predict_fn, feature_names)
explainer.fit(X_train)  # fit on reference data before explaining
explanation = explainer.explain(X_test[0])  # explain a single instance

AIF360 (Reweighing):

from aif360.algorithms.preprocessing import Reweighing
rw = Reweighing(unprivileged_groups, privileged_groups)
dataset_transf = rw.fit_transform(dataset)

Both libraries offer unique approaches to addressing AI fairness and explainability. Alibi provides a broader scope of interpretability techniques, while AIF360 focuses more specifically on fairness metrics and mitigation strategies. The choice between the two depends on the specific requirements of your project and the depth of fairness analysis needed.

README

AI Fairness 360 (AIF360)


The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 package includes

  1. a comprehensive set of metrics for datasets and models to test for biases,
  2. explanations for these metrics, and
  3. algorithms to mitigate bias in datasets and models.

It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
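
As an illustration of the first two items, a dataset metric can be paired with a plain-language explainer. The sketch below is illustrative only; the DataFrame `df` and its column names are placeholders:

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.explainers import MetricTextExplainer

# Placeholder dataset; in practice build it from your own DataFrame
dataset = BinaryLabelDataset(df=df, label_names=['label'], protected_attribute_names=['race'])
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=[{'race': 0}], privileged_groups=[{'race': 1}])

explainer = MetricTextExplainer(metric)
print(explainer.disparate_impact())  # textual explanation of the disparate impact value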

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.

Because the toolkit offers such a comprehensive set of capabilities, it can be hard to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.

We have developed the package with extensibility in mind. This library is still in development. We encourage the contribution of your metrics, explainers, and debiasing algorithms.

Get in touch with us on Slack (invitation here)!

Supported bias mitigation algorithms

Supported fairness metrics

  • Comprehensive set of group fairness metrics derived from selection rates and error rates, including rich subgroup fairness
  • Comprehensive set of sample distortion metrics
  • Generalized Entropy Index (Speicher et al., 2018); see the sketch after this list
  • Differential Fairness and Bias Amplification (Foulds et al., 2018)
  • Bias Scan with Multi-Dimensional Subset Scan (Zhang, Neill, 2017)
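
For instance, the Generalized Entropy Index is exposed as a method on ClassificationMetric. The sketch below assumes `test` and `test_pred` are BinaryLabelDataset objects holding the true and predicted labels (placeholder names):

from aif360.metrics import ClassificationMetric

metric = ClassificationMetric(test, test_pred, unprivileged_groups=[{'race': 0}], privileged_groups=[{'race': 1}])
print(metric.generalized_entropy_index(alpha=2))  # inequality in benefit across individuals
print(metric.theil_index())                       # Theil index, the alpha=1 special case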

Setup

R

install.packages("aif360")

For more details regarding the R setup, please refer to instructions here.

Python

Supported Python Configurations:

OS        Python version
macOS     3.8 – 3.11
Ubuntu    3.8 – 3.11
Windows   3.8 – 3.11

(Optional) Create a virtual environment

AIF360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended to ensure dependencies may be installed safely. If you have trouble installing AIF360, try this first.

Conda

Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) if you do not already have conda installed.

Then, to create a new Python 3.11 environment, run:

conda create --name aif360 python=3.11
conda activate aif360

The shell should now look like (aif360) $. To deactivate the environment, run:

(aif360)$ conda deactivate

The prompt will return to $.

Install with pip

To install the latest stable version from PyPI, run:

pip install aif360

Note: Some algorithms require additional dependencies (although the metrics will all work out-of-the-box). To install with certain algorithm dependencies included, run, e.g.:

pip install 'aif360[LFR,OptimPreproc]'

or, for complete functionality, run:

pip install 'aif360[all]'

The options for available extras are: OptimPreproc, LFR, AdversarialDebiasing, DisparateImpactRemover, LIME, ART, Reductions, FairAdapt, inFairness, LawSchoolGPA, notebooks, tests, docs, all

If you encounter any errors, try the Troubleshooting steps.

Manual installation

Clone the latest version of this repository:

git clone https://github.com/Trusted-AI/AIF360

If you'd like to run the examples, download the datasets now and place them in their respective folders as described in aif360/data/README.md.

Then, navigate to the root directory of the project and run:

pip install --editable '.[all]'

Run the Examples

To run the example notebooks, complete the manual installation steps above. Then, if you did not use the [all] option, install the additional requirements as follows:

pip install -e '.[notebooks]'

Finally, if you did not already, download the datasets as described in aif360/data/README.md.

Troubleshooting

If you encounter any errors during the installation process, look for your issue here and try the solutions.

TensorFlow

See the Install TensorFlow with pip page for detailed instructions.

Note: we require 'tensorflow >= 1.13.1'.

Once tensorflow is installed, try re-running:

pip install 'aif360[AdversarialDebiasing]'

TensorFlow is only required for use with the aif360.algorithms.inprocessing.AdversarialDebiasing class.
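
For reference, here is a minimal sketch of how this class is typically instantiated; the training/test dataset objects and group encodings below are placeholders:

import tensorflow as tf
from aif360.algorithms.inprocessing import AdversarialDebiasing

tf.compat.v1.disable_eager_execution()  # the class uses the TF1-style graph API
sess = tf.compat.v1.Session()

debiased_model = AdversarialDebiasing(unprivileged_groups=[{'race': 0}],
                                      privileged_groups=[{'race': 1}],
                                      scope_name='debiased_classifier',
                                      debias=True,
                                      sess=sess)
debiased_model.fit(dataset_train)                    # dataset_train: a BinaryLabelDataset
dataset_pred = debiased_model.predict(dataset_test)  # dataset_test: held-out BinaryLabelDataset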

CVXPY

On MacOS, you may first have to install the Xcode Command Line Tools if you never have previously:

xcode-select --install

On Windows, you may need to download the Microsoft C++ Build Tools for Visual Studio 2019. See the CVXPY Install page for up-to-date instructions.

Then, try reinstalling via:

pip install 'aif360[OptimPreproc]'

CVXPY is only required for use with the aif360.algorithms.preprocessing.OptimPreproc class.

Using AIF360

The examples directory contains a diverse collection of Jupyter notebooks that use AI Fairness 360 in various ways. Both tutorials and demos illustrate working code using AIF360. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and demos here.

Citing AIF360

A technical description of AI Fairness 360 is available in this paper. Below is the BibTeX entry for this paper.

@misc{aif360-oct-2018,
    title = "{AI Fairness} 360:  An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias",
    author = {Rachel K. E. Bellamy and Kuntal Dey and Michael Hind and
	Samuel C. Hoffman and Stephanie Houde and Kalapriya Kannan and
	Pranay Lohia and Jacquelyn Martino and Sameep Mehta and
	Aleksandra Mojsilovic and Seema Nagar and Karthikeyan Natesan Ramamurthy and
	John Richards and Diptikalyan Saha and Prasanna Sattigeri and
	Moninder Singh and Kush R. Varshney and Yunfeng Zhang},
    month = oct,
    year = {2018},
    url = {https://arxiv.org/abs/1810.01943}
}

AIF360 Videos

  • Introductory video to AI Fairness 360 by Kush Varshney, September 20, 2018 (32 mins)

Contributing

The development fork for Rich Subgroup Fairness (inprocessing/gerryfair_classifier.py) is here. Contributions are welcome and a list of potential contributions from the authors can be found here.