
Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams


Top Related Projects

  • adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
  • Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
  • advertorch: A Toolbox for Adversarial Robustness Research
  • cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both

Quick Overview

The Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. It provides developers and researchers with tools to defend and evaluate machine learning models and applications against adversarial threats. ART supports a wide range of ML frameworks, including TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, and more.
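The threat categories ART targets (evasion, poisoning, extraction, and inference) each map onto their own attack subpackage. A minimal sketch of the layout, importing one representative attack per category (class names as documented in ART; constructor arguments vary by attack):

# One representative attack per threat category.
from art.attacks.evasion import FastGradientMethod              # evasion: perturb inputs at prediction time
from art.attacks.poisoning import PoisoningAttackBackdoor       # poisoning: corrupt the training data
from art.attacks.extraction import CopycatCNN                   # extraction: clone a model through queries
from art.attacks.inference.membership_inference import MembershipInferenceBlackBox  # inference: infer training-set membership

Defences follow the same scheme under art.defences (for example art.defences.trainer and art.defences.preprocessor).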

Pros

  • Comprehensive: Covers a wide range of adversarial attacks, defenses, and metrics
  • Framework-agnostic: Supports multiple popular ML frameworks
  • Actively maintained: Regular updates and contributions from the community
  • Well-documented: Extensive documentation and examples available

Cons

  • Learning curve: Can be complex for beginners due to the breadth of functionality
  • Performance: Some implementations may not be optimized for large-scale applications
  • Dependencies: Requires multiple dependencies, which can lead to potential conflicts

Code Examples

  1. Loading a pre-trained model and creating a classifier:

from art.estimators.classification import KerasClassifier
from tensorflow.keras.applications import ResNet50

# Note: with TensorFlow 2.x, ART's Keras examples first call
# tf.compat.v1.disable_eager_execution() before wrapping the model.
model = ResNet50(weights='imagenet')
classifier = KerasClassifier(model=model, clip_values=(0, 255))

  2. Generating adversarial examples using the Fast Gradient Sign Method (FGSM):

from art.attacks.evasion import FastGradientMethod

# x_test: NumPy array of test inputs
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

  3. Implementing adversarial training (Madry's PGD variant) as a defense:

from art.defences.trainer import AdversarialTrainerMadryPGD

# x_train, y_train: NumPy arrays of training inputs and labels
trainer = AdversarialTrainerMadryPGD(classifier, nb_epochs=10, batch_size=128)
trainer.fit(x_train, y_train)
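A common follow-up is to quantify how much accuracy the attack removes. The sketch below assumes x_test and y_test are NumPy arrays (labels one-hot encoded) and reuses classifier and x_adv from the examples above:

import numpy as np

# Compare accuracy on clean vs. adversarial inputs.
preds_clean = np.argmax(classifier.predict(x_test), axis=1)
preds_adv = np.argmax(classifier.predict(x_adv), axis=1)
labels = np.argmax(y_test, axis=1)
print("Clean accuracy:      ", np.mean(preds_clean == labels))
print("Adversarial accuracy:", np.mean(preds_adv == labels))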

Getting Started

To get started with ART, follow these steps (a complete, self-contained sketch follows this list):

  1. Install the library:

pip install adversarial-robustness-toolbox

  2. Import the modules you need:

from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainerMadryPGD

  3. Wrap your model as an ART classifier:

classifier = KerasClassifier(model=your_model, clip_values=(0, 255))

  4. Choose an attack or defense method and apply it to your model:

attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
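The steps above can be combined into one runnable script. The sketch below deliberately swaps Keras for PyTorch to avoid framework-specific setup, uses an untrained toy network, and substitutes random placeholder data for x_test; with a trained model and real data the flow is identical.

import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy model for 28x28 grayscale inputs (untrained, illustration only).
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=5),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 24 * 24, 10),
)

# Wrap the model so ART attacks and defences can drive it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Random placeholder data standing in for x_test.
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)

# Craft adversarial examples and compare predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
print(np.argmax(classifier.predict(x_test), axis=1))
print(np.argmax(classifier.predict(x_adv), axis=1))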

For more detailed instructions and examples, refer to the official documentation.

Competitor Comparisons

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Pros of adversarial-robustness-toolbox

  • Comprehensive library for adversarial machine learning
  • Supports multiple deep learning frameworks (TensorFlow, Keras, PyTorch)
  • Actively maintained with regular updates and contributions

Cons of adversarial-robustness-toolbox

  • Learning curve can be steep for beginners
  • Documentation could be more extensive for some advanced features
  • May have performance overhead for large-scale applications

Code Comparison

Both listings refer to the same repository, so there is no code comparison to make. However, here's a sample of how to use the library:

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

# Create a KerasClassifier
classifier = KerasClassifier(model=model, clip_values=(0, 1))

# Create an attack
attack = FastGradientMethod(classifier, eps=0.1)

# Generate adversarial examples
x_adv = attack.generate(x_test)

This code snippet demonstrates how to create an adversarial attack using the Fast Gradient Method on a Keras model.


Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Pros of Foolbox

  • Simpler API and easier to use for beginners
  • Faster execution for certain attack types
  • Better documentation and examples for quick start

Cons of Foolbox

  • Fewer attack types and defense methods compared to ART
  • Less focus on robustness evaluation and certification
  • Limited support for frameworks other than PyTorch and TensorFlow

Code Comparison

Foolbox:

import foolbox as fb
model = fb.PyTorchModel(net, bounds=(0, 1))
attack = fb.attacks.FGSM()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(model, images, labels, epsilons=epsilons)

Adversarial Robustness Toolbox:

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
classifier = PyTorchClassifier(model=model, loss=criterion, input_shape=(3, 32, 32), nb_classes=10)
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x_test)

Both toolboxes offer similar functionality for generating adversarial examples, but Foolbox's API is more concise and intuitive for simple attacks. ART provides more comprehensive options and supports a wider range of frameworks and scenarios.
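For reference, the epsilon sweep that Foolbox performs in a single call can be reproduced in ART with a short loop; this is only a sketch and assumes classifier and x_test already exist:

from art.attacks.evasion import FastGradientMethod

# Sweep perturbation budgets, mirroring Foolbox's `epsilons` argument.
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
adv_by_eps = {}
for eps in epsilons:
    attack = FastGradientMethod(estimator=classifier, eps=eps)
    adv_by_eps[eps] = attack.generate(x=x_test)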

advertorch: A Toolbox for Adversarial Robustness Research

Pros of advertorch

  • Focused specifically on adversarial machine learning in PyTorch
  • Simpler API and easier to use for PyTorch-based projects
  • More lightweight and faster for certain attack implementations

Cons of advertorch

  • Limited to PyTorch framework only
  • Fewer attack and defense methods compared to adversarial-robustness-toolbox
  • Less active development and community support

Code Comparison

advertorch:

import torch.nn as nn
from advertorch.attacks import PGDAttack

adversary = PGDAttack(model, loss_fn=nn.CrossEntropyLoss(), eps=0.3, nb_iter=40)
adv_examples = adversary.perturb(images, labels)

adversarial-robustness-toolbox:

from art.attacks.evasion import ProjectedGradientDescent
pgd = ProjectedGradientDescent(classifier, eps=0.3, max_iter=40)
adv_examples = pgd.generate(x=images)

Both libraries offer similar functionality for implementing adversarial attacks, but advertorch's API is more concise and PyTorch-specific. adversarial-robustness-toolbox provides a more framework-agnostic approach, supporting multiple deep learning libraries and offering a wider range of attack and defense methods.
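Because ART attacks only talk to the estimator interface, the attack code itself stays the same when the framework changes; only the wrapper differs. The helper below is purely illustrative (not part of either library) and assumes x is a NumPy array and y holds integer class labels:

import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

def pgd_accuracy(classifier, x, y):
    """Run PGD against any ART classifier and return adversarial accuracy."""
    attack = ProjectedGradientDescent(classifier, eps=0.3, max_iter=40)
    x_adv = attack.generate(x=x)
    preds = np.argmax(classifier.predict(x_adv), axis=1)
    return float(np.mean(preds == y))

# The same call works whether `classifier` wraps a PyTorch, TensorFlow,
# or other supported model, as long as the estimator exposes loss gradients.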

cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both

Pros of cleverhans

  • Focused specifically on adversarial examples and attacks
  • Simpler API and easier to get started for beginners
  • Well-established in the research community with many citations

Cons of cleverhans

  • Less comprehensive in terms of defenses and robustness metrics
  • Not as actively maintained as adversarial-robustness-toolbox
  • Limited support for frameworks other than TensorFlow

Code Comparison

cleverhans:

import numpy as np
# Newer cleverhans releases move this to cleverhans.tf2.attacks.fast_gradient_method.
from cleverhans.future.tf2.attacks import fast_gradient_method

adv_x = fast_gradient_method(model, x, eps=0.3, norm=np.inf)

adversarial-robustness-toolbox:

from art.attacks.evasion import FastGradientMethod
attack = FastGradientMethod(estimator=classifier, eps=0.3)
adv_x = attack.generate(x=x)

Both libraries offer similar functionality for generating adversarial examples, but adversarial-robustness-toolbox provides a more object-oriented approach with separate attack instantiation and generation steps. This allows for more flexibility and reusability in complex scenarios.
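That separation also lets the same attack object be reused in larger workflows, for example feeding the attack used for evaluation straight into adversarial training. A sketch, assuming classifier, x_test, x_train, and y_train already exist (AdversarialTrainer is ART's generic adversarial-training defence):

from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

# Build the attack once...
attack = FastGradientMethod(estimator=classifier, eps=0.3)

# ...use it to evaluate robustness...
x_adv = attack.generate(x=x_test)

# ...and reuse the same instance inside a defence.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=5, batch_size=128)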


README

Adversarial Robustness Toolbox (ART) v1.18



A Chinese version of this README is available here.


Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).
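Because all attacks operate on ART's estimator abstraction, non-deep-learning models are handled the same way as neural networks. The sketch below attacks a scikit-learn model with HopSkipJump, a black-box evasion attack that only needs predictions; the dataset and parameters are chosen purely for illustration:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train an ordinary scikit-learn model.
x, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50).fit(x, y)

# Wrap it as an ART estimator and run a black-box evasion attack.
classifier = SklearnClassifier(model=model, clip_values=(float(x.min()), float(x.max())))
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10)
x_adv = attack.generate(x=x[:5].astype(np.float32))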

Adversarial Threats


ART for Red and Blue Teams (selection)


Learn more

Get Started
- Installation
- Examples
- Notebooks

Documentation
- Attacks
- Defences
- Estimators
- Metrics
- Technical Documentation

Contributing
- Slack, Invitation
- Contributing
- Roadmap
- Citing

The library is under continuous development. Feedback, bug reports and contributions are very welcome!

Acknowledgment

This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).