foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Top Related Projects
- cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both
- adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- advertorch: A Toolbox for Adversarial Robustness Research
Quick Overview
Foolbox is a Python toolbox for creating and evaluating adversarial examples in machine learning models. It provides a comprehensive set of attack algorithms and benchmarking capabilities, making it easier for researchers and practitioners to assess the robustness of their models against various adversarial threats.
Pros
- Wide range of attack algorithms: Supports numerous state-of-the-art adversarial attack methods
- Framework-agnostic: Compatible with popular deep learning frameworks like PyTorch, TensorFlow, and JAX
- Extensible: Allows easy implementation of custom attacks and models
- Comprehensive documentation and examples
Cons
- Learning curve: May require some time to understand the concepts and API
- Performance: Some attacks can be computationally expensive, especially on large datasets
- Limited to adversarial examples: Focuses solely on adversarial attacks, not other aspects of model robustness
Code Examples
- Creating an adversarial example using the Fast Gradient Sign Method (FGSM); a note on reading the result follows the code:
import foolbox as fb
import torch
# torch_model is your trained PyTorch model (set to eval mode beforehand)
model = fb.PyTorchModel(torch_model, bounds=(0, 1))
attack = fb.attacks.FGSM()
# load_dataset() is a placeholder for your own data loading;
# images and labels should be torch tensors on the model's device
images, labels = load_dataset()
_, advs, success = attack(model, images, labels, epsilons=0.03)
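The returned success tensor marks, per input, whether the attack found an adversarial example within the given epsilon. As a minimal sketch (assuming the attack returned native PyTorch tensors, as it does for a PyTorchModel), robust accuracy follows directly:
# success has shape (batch_size,) for a single epsilon; True means the attack succeeded
robust_accuracy = 1 - success.float().mean().item()
print(f"robust accuracy at eps=0.03: {robust_accuracy:.3f}")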
- Evaluating model robustness against multiple attacks:
import foolbox as fb
model = fb.PyTorchModel(torch_model, bounds=(0, 1))
attacks = [
    fb.attacks.FGSM(),
    fb.attacks.PGD(),
    fb.attacks.LinfDeepFoolAttack(),
]
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
# run each attack and report robust accuracy per epsilon
for attack in attacks:
    _, advs, success = attack(model, images, labels, epsilons=epsilons)
    robust_accuracy = 1 - success.float().mean(dim=-1)
    print(attack, robust_accuracy)
- Implementing a custom attack (skeleton; a concrete sketch follows below):
import foolbox as fb
class MyCustomAttack(fb.attacks.base.Attack):
    def run(self, model, inputs, criterion, *, epsilon, **kwargs):
        # implement your custom attack logic here and return the
        # perturbed inputs (same type and shape as inputs)
        return adversarial_examples
attack = MyCustomAttack()
_, advs, success = attack(model, images, labels, epsilons=0.1)
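For illustration, here is a rough but self-contained sketch of a trivial fixed-epsilon attack that adds uniform noise and clips to the model bounds. It mirrors the pattern of Foolbox's built-in attacks (eagerpy tensors in and out); treat the exact base-class contract (FixedEpsilonAttack, distance, the eagerpy helpers) as an assumption that may vary between Foolbox versions:
import eagerpy as ep
from foolbox.attacks.base import FixedEpsilonAttack
from foolbox.distances import linf

class AdditiveUniformNoiseSketch(FixedEpsilonAttack):
    # toy attack: perturb each input by uniform noise of magnitude epsilon
    distance = linf

    def run(self, model, inputs, criterion, *, epsilon, **kwargs):
        x, restore_type = ep.astensor_(inputs)
        min_, max_ = model.bounds
        noise = x.uniform(x.shape, -epsilon, epsilon)
        x_adv = (x + noise).clip(min_, max_)
        return restore_type(x_adv)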
Getting Started
To get started with Foolbox, install it using pip:
pip install foolbox
Then, import the library and create a model wrapper:
import foolbox as fb
import torch
# Assuming you have a trained PyTorch model (placeholder name)
torch_model = YourPyTorchModel()
torch_model.eval()
model = fb.PyTorchModel(torch_model, bounds=(0, 1))
# Load your dataset (placeholder; images and labels should be torch tensors)
images, labels = load_dataset()
# Choose an attack
attack = fb.attacks.PGD()
# Run the attack
_, advs, success = attack(model, images, labels, epsilons=0.03)
This basic example demonstrates how to set up a model, choose an attack, and generate adversarial examples using Foolbox.
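Before attacking, it is often worth sanity-checking the wrapped model's clean accuracy. A minimal sketch using Foolbox's helper functions (the helper names follow the Foolbox 3 API; fb.utils.samples loads a small batch of example images bundled with Foolbox, here assumed to match an ImageNet model):
import foolbox as fb

fmodel = fb.PyTorchModel(torch_model, bounds=(0, 1))

# a small batch of example images scaled to the model's input range
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=16)

clean_accuracy = fb.utils.accuracy(fmodel, images, labels)
print(f"clean accuracy: {clean_accuracy:.3f}")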
Competitor Comparisons
cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both
Pros of cleverhans
- More comprehensive set of adversarial attacks and defenses
- Better integration with TensorFlow and Keras
- Extensive documentation and tutorials
Cons of cleverhans
- Less support for PyTorch models
- Steeper learning curve for beginners
- Less frequent updates compared to Foolbox
Code Comparison
cleverhans example:
# legacy cleverhans (TF1-session) API; sess and fgsm_params are placeholders
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_keras import KerasModelWrapper

model_wrap = KerasModelWrapper(model)
fgsm = FastGradientMethod(model_wrap, sess=sess)
adv_x = fgsm.generate(x, **fgsm_params)
Foolbox example:
import foolbox as fb
fmodel = fb.PyTorchModel(model, bounds=(0, 1))
attack = fb.attacks.FGSM()
_, adv_x, _ = attack(fmodel, images, labels, epsilons=0.03)
Both libraries provide similar functionality for generating adversarial examples, but cleverhans is more tightly integrated with TensorFlow and Keras, while Foolbox offers a more straightforward API and better support for PyTorch models. cleverhans provides a wider range of attacks and defenses, making it suitable for more advanced research, while Foolbox is generally easier to use for beginners and offers more frequent updates.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Pros of adversarial-robustness-toolbox
- Broader scope, covering various aspects of AI security beyond just adversarial attacks
- Supports multiple deep learning frameworks (TensorFlow, Keras, PyTorch)
- More extensive documentation and tutorials
Cons of adversarial-robustness-toolbox
- Steeper learning curve due to its comprehensive nature
- May be overkill for projects focused solely on adversarial attacks
Code Comparison
foolbox:
import foolbox as fb
model = fb.models.PyTorchModel(net, bounds=(0, 1))
attack = fb.attacks.FGSM()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(model, images, labels, epsilons=epsilons)
adversarial-robustness-toolbox:
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
classifier = PyTorchClassifier(model=model, loss=criterion, input_shape=(3, 32, 32), nb_classes=10)
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_test_adv = attack.generate(x=x_test)
Both libraries offer similar functionality for generating adversarial examples, but adversarial-robustness-toolbox provides a more comprehensive set of tools for AI security beyond just adversarial attacks.
advertorch: A Toolbox for Adversarial Robustness Research
Pros of advertorch
- More comprehensive set of adversarial attacks and defenses
- Better integration with PyTorch ecosystem
- More active development and frequent updates
Cons of advertorch
- Steeper learning curve for beginners
- Less focus on model-agnostic attacks
- Fewer built-in visualization tools
Code Comparison
advertorch:
import torch.nn as nn
from advertorch.attacks import PGDAttack

adversary = PGDAttack(model, loss_fn=nn.CrossEntropyLoss(), eps=0.3,
                      nb_iter=40, eps_iter=0.01, rand_init=True)
adv_examples = adversary.perturb(images, labels)
foolbox:
import foolbox as fb

fmodel = fb.PyTorchModel(model, bounds=(0, 1))
attack = fb.attacks.PGD()
_, adv_examples, success = attack(fmodel, images, labels, epsilons=0.3)
Both libraries offer similar functionality for implementing adversarial attacks, but advertorch provides a more PyTorch-centric approach with additional parameters for fine-tuning the attack. foolbox, on the other hand, offers a more model-agnostic interface that can work with different deep learning frameworks.
README
.. image:: https://badge.fury.io/py/foolbox.svg
   :target: https://badge.fury.io/py/foolbox

.. image:: https://readthedocs.org/projects/foolbox/badge/?version=latest
   :target: https://foolbox.readthedocs.io/en/latest/

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/ambv/black

.. image:: https://joss.theoj.org/papers/10.21105/joss.02607/status.svg
   :target: https://doi.org/10.21105/joss.02607
Foolbox: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
=========================================================================================================================
`Foolbox <https://foolbox.jonasrauber.de>`_ is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in `PyTorch <https://pytorch.org>`_, `TensorFlow <https://www.tensorflow.org>`_, and `JAX <https://github.com/google/jax>`_.
🔥 Design
Foolbox 3 has been rewritten from scratch using `EagerPy <https://github.com/jonasrauber/eagerpy>`_ instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow, and JAX, all with one code base without code duplication; the same attack code runs unchanged against models from any of the three frameworks, as sketched below.
- Native Performance: Foolbox 3 is built on top of EagerPy and runs natively in PyTorch, TensorFlow, and JAX and comes with real batch support.
- State-of-the-art attacks: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.
- Type Checking: Catch bugs before running your code thanks to extensive type annotations in Foolbox.
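As a rough illustration of this design (tf_model, images, and labels are placeholders; the TensorFlowModel and JAXModel wrapper names follow the Foolbox 3 API), only the model wrapper changes between frameworks while the attack code stays identical:

.. code-block:: python

   import foolbox as fb

   # PyTorch:    fmodel = fb.PyTorchModel(torch_model, bounds=(0, 1))
   # JAX:        fmodel = fb.JAXModel(jax_predict_fn, bounds=(0, 1))
   fmodel = fb.TensorFlowModel(tf_model, bounds=(0, 1))

   # identical attack code for all three frameworks
   attack = fb.attacks.LinfPGD()
   _, advs, success = attack(fmodel, images, labels, epsilons=0.03)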
📖 Documentation
- Guide: The best place to get started with Foolbox is the official `guide <https://foolbox.jonasrauber.de>`_.
- Tutorial: If you are looking for a tutorial, check out this `Jupyter notebook <https://github.com/jonasrauber/foolbox-native-tutorial/blob/master/foolbox-native-tutorial.ipynb>`_ |colab|.
- Documentation: The API documentation can be found on `ReadTheDocs <https://foolbox.readthedocs.io/en/stable/>`_.

.. |colab| image:: https://colab.research.google.com/assets/colab-badge.svg
   :target: https://colab.research.google.com/github/jonasrauber/foolbox-native-tutorial/blob/master/foolbox-native-tutorial.ipynb
🚀 Quickstart
.. code-block:: bash

   pip install foolbox
Foolbox is tested with Python 3.8 and newer - however, it will most likely also work with versions 3.6 - 3.8. To use it with `PyTorch <https://pytorch.org>`_, `TensorFlow <https://www.tensorflow.org>`_, or `JAX <https://github.com/google/jax>`_, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to use and thus install all of them and because some of these packages have different builds for different architectures and CUDA versions. Besides that, all essential dependencies are automatically installed.
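For example, a PyTorch user would install the framework alongside Foolbox (package names and builds depend on your platform and CUDA setup, so treat this as a sketch):

.. code-block:: bash

   pip install foolbox
   pip install torch torchvision  # or tensorflow, or jax, depending on your models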
You can see the versions we currently use for testing in the `Compatibility section <#-compatibility>`_ below, but newer versions are in general expected to work.
🎉 Example
.. code-block:: python

   import foolbox as fb

   model = ...
   fmodel = fb.PyTorchModel(model, bounds=(0, 1))

   attack = fb.attacks.LinfPGD()
   epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
   _, advs, success = attack(fmodel, images, labels, epsilons=epsilons)
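The returned success array has one row per epsilon; as a rough sketch (assuming native PyTorch tensors, as returned for a ``PyTorchModel``), it can be turned into a robust-accuracy curve:

.. code-block:: python

   # success has shape (len(epsilons), batch_size); True means the attack succeeded
   robust_accuracy = 1 - success.float().mean(dim=-1)
   for eps, acc in zip(epsilons, robust_accuracy):
       print(f"eps={eps:<6} robust accuracy={acc.item():.3f}")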
More examples can be found in the `examples <./examples/>`_ folder, e.g. a full `ResNet-18 example <./examples/single_attack_pytorch_resnet18.py>`_.
📄 Citation
If you use Foolbox for your work, please cite our `JOSS paper on Foolbox Native (i.e., Foolbox 3.0) <https://doi.org/10.21105/joss.02607>`_ and our `ICML workshop paper on Foolbox <https://arxiv.org/abs/1707.04131>`_ using the following BibTeX entries:
.. code-block::

   @article{rauber2017foolboxnative,
     doi = {10.21105/joss.02607},
     url = {https://doi.org/10.21105/joss.02607},
     year = {2020},
     publisher = {The Open Journal},
     volume = {5},
     number = {53},
     pages = {2607},
     author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
     title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
     journal = {Journal of Open Source Software}
   }
.. code-block::

   @inproceedings{rauber2017foolbox,
     title = {Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
     author = {Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
     booktitle = {Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
     year = {2017},
     url = {http://arxiv.org/abs/1707.04131},
   }
👍 Contributions
We welcome contributions of all kinds, please have a look at our `development guidelines <https://foolbox.jonasrauber.de/guide/development.html>`_. In particular, you are invited to contribute `new adversarial attacks <https://foolbox.jonasrauber.de/guide/adding_attacks.html>`_. If you would like to help, you can also have a look at the issues that are marked with `contributions welcome <https://github.com/bethgelab/foolbox/issues?q=is%3Aopen+is%3Aissue+label%3A%22contributions+welcome%22>`_.
💡 Questions?
If you have a question or need help, feel free to open an issue on GitHub. Once GitHub Discussions becomes publicly available, we will switch to that.
🚨 Performance
Foolbox 3.0 is much faster than Foolbox 1 and 2. A basic `performance comparison`_ can be found in the performance folder.
🐍 Compatibility
We currently test with the following versions:
- PyTorch 1.10.1
- TensorFlow 2.6.3
- JAX 0.2.517
- NumPy 1.18.1
.. _performance comparison: performance/README.md