
AdvBox

AdvBox is a toolbox for generating adversarial examples that fool neural networks built with PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, or TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox also provides a command-line tool for generating adversarial examples with zero coding.


Top Related Projects

An adversarial example library for constructing attacks, building defenses, and benchmarking both


A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

A Toolbox for Adversarial Robustness Research

Quick Overview

AdvBox is an open-source toolbox for generating adversarial examples and conducting adversarial attacks on deep learning models. It provides a comprehensive set of attack algorithms and defense methods, supporting various deep learning frameworks such as PaddlePaddle, PyTorch, and TensorFlow.

Pros

  • Supports multiple deep learning frameworks, making it versatile for different environments
  • Offers a wide range of attack algorithms and defense methods
  • Provides easy-to-use APIs for generating adversarial examples
  • Regularly updated with new attack and defense techniques

Cons

  • Documentation could be more comprehensive, especially for beginners
  • Some advanced features may require a deep understanding of adversarial machine learning
  • Limited support for certain specialized domains or model architectures

Code Examples

  1. Generating an adversarial example using an FGSM attack (a from-scratch sketch of what FGSM computes appears after this list):
import paddle
from advbox.attacks import FGSM
from advbox.models import PaddleModel

# Load your model and data
model = YourPaddleModel()
x, y = load_data()

# Create an AdvBox model
adv_model = PaddleModel(model, bounds=[0, 1])

# Create FGSM attack
attack = FGSM(adv_model)

# Generate adversarial example
adv_x = attack(x, y)
  2. Implementing a basic defense using adversarial training:
from advbox.defenses import AdversarialTraining

# Create adversarial training defense
defense = AdversarialTraining(adv_model, attack_method=FGSM)

# Train the model with adversarial examples
defense.fit(train_loader, epochs=10)
  3. Evaluating model robustness:
from advbox.evaluation import Evaluator
from advbox.attacks import PGD, CW  # attack classes used alongside FGSM below

# Create an evaluator
evaluator = Evaluator(adv_model)

# Evaluate model robustness against multiple attacks
results = evaluator.evaluate(test_loader, attacks=[FGSM(), PGD(), CW()])
print(results)
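
For reference, here is a minimal from-scratch sketch in plain PyTorch of what the FGSM attack in example 1 computes. It is independent of AdvBox; the model, inputs, labels, and epsilon are placeholders you would supply:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03, bounds=(0.0, 1.0)):
    # One-step FGSM: perturb x along the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    adv_x = x + epsilon * x.grad.sign()      # move each pixel by +/- epsilon
    return adv_x.clamp(*bounds).detach()     # keep the result in the valid input range

Conceptually, the FGSM implementations in AdvBox and the libraries compared below perform this same computation, adding framework-specific gradient handling and bound management on top.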

Getting Started

To get started with AdvBox, follow these steps:

  1. Install AdvBox:
pip install advbox
  2. Import necessary modules:
from advbox.models import PaddleModel
from advbox.attacks import FGSM
from advbox.defenses import AdversarialTraining
  3. Load your model and create an AdvBox model:
model = YourPaddleModel()
adv_model = PaddleModel(model, bounds=[0, 1])
  4. Choose an attack method and generate adversarial examples:
attack = FGSM(adv_model)
adv_x = attack(x, y)
  5. Implement defenses and evaluate model robustness as needed (a minimal adversarial-training sketch follows).
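
The defense step can also be illustrated without any toolbox. Below is a minimal, hedged sketch of one epoch of adversarial training in plain PyTorch; `fgsm` is the from-scratch helper sketched earlier, and the model, data loader, and optimizer are placeholders:

import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    # Simple adversarial training: craft adversarial inputs on the fly and train on them.
    model.train()
    for x, y in loader:
        adv_x = fgsm(model, x, y, epsilon=epsilon)   # adversarial version of the batch
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_x), y)
        loss.backward()
        optimizer.step()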

Competitor Comparisons

An adversarial example library for constructing attacks, building defenses, and benchmarking both

Pros of cleverhans

  • More comprehensive library with a wider range of attack and defense methods
  • Better documentation and examples for ease of use
  • Active development and maintenance with regular updates

Cons of cleverhans

  • Steeper learning curve due to its extensive features
  • Potentially slower execution for simpler tasks compared to AdvBox
  • Larger codebase, which may be overwhelming for beginners

Code Comparison

AdvBox example:

import advbox
model = advbox.models.PaddleModel(paddle_model)
attack = advbox.attacks.FGSM(model)
adv_x = attack(x, y)

cleverhans example:

import numpy as np
import tensorflow as tf
from cleverhans.future.tf2.attacks import fast_gradient_method

model = tf.keras.models.load_model('model.h5')
adv_x = fast_gradient_method(model, x, eps=0.3, norm=np.inf)

Both libraries offer similar functionality for generating adversarial examples, but cleverhans provides more options and flexibility in its implementation. AdvBox focuses on simplicity and ease of use, while cleverhans offers a more comprehensive set of tools for advanced users and researchers in the field of adversarial machine learning.


A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Pros of Foolbox

  • More comprehensive documentation and examples
  • Wider range of supported attack methods
  • Active development and frequent updates

Cons of Foolbox

  • Steeper learning curve for beginners
  • Potentially slower execution for some attacks

Code Comparison

Foolbox:

import foolbox as fb
model = fb.models.PyTorchModel(net, bounds=(0, 1))
attack = fb.attacks.FGSM()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(model, images, labels, epsilons=epsilons)

AdvBox:

from advbox.adversary import Adversary
from advbox.attacks.gradient_method import FGSM
from advbox.models.paddle import PaddleModel

attack = FGSM(model)                 # model: a PaddleModel wrapping the trained network
adversary = Adversary(x, y)          # wrap the input and its true label
adversary = attack(adversary, epsilon=0.1)

Both libraries offer similar functionality for generating adversarial examples, but Foolbox provides more granular control over attack parameters and supports multiple epsilon values in a single call. AdvBox's syntax is more concise but may offer less flexibility for advanced use cases.
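
As a usage note on the multi-epsilon point, the `success` tensor returned by the Foolbox call above has one row per epsilon, so a robust-accuracy curve falls out directly. A hedged continuation of the snippet, assuming native PyTorch tensors were passed in:

# success has shape (len(epsilons), batch): True where the attack changed the prediction.
robust_accuracy = 1 - success.float().mean(dim=-1)
for eps, acc in zip(epsilons, robust_accuracy):
    print(f"epsilon={eps}: robust accuracy {acc.item():.3f}")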

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Pros of adversarial-robustness-toolbox

  • More comprehensive library with a wider range of attacks, defenses, and metrics
  • Better documentation and examples for ease of use
  • Active development and regular updates

Cons of adversarial-robustness-toolbox

  • Steeper learning curve due to its extensive features
  • Potentially slower execution for simpler tasks

Code Comparison

AdvBox:

from advbox.attacks import FGSM
from advbox.models import PaddleModel

model = PaddleModel(paddle_model, bounds=[0, 1])   # wrap a trained PaddlePaddle model
attack = FGSM(model)
adversarial_examples = attack(images, labels)

adversarial-robustness-toolbox:

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

classifier = KerasClassifier(model=keras_model, clip_values=(0, 1))  # wrap a trained Keras model
attack = FastGradientMethod(classifier)
adversarial_examples = attack.generate(x=images)

Both libraries offer similar functionality for generating adversarial examples, but adversarial-robustness-toolbox provides more options and flexibility in its implementation. AdvBox focuses on simplicity and ease of use, while adversarial-robustness-toolbox offers a more comprehensive set of tools for advanced users and researchers in the field of adversarial machine learning.

A Toolbox for Adversarial Robustness Research

Pros of advertorch

  • More comprehensive set of attack and defense algorithms
  • Better documentation and examples
  • Active development and maintenance

Cons of advertorch

  • Steeper learning curve for beginners
  • Primarily focused on PyTorch, limiting flexibility

Code Comparison

advertorch:

import torch.nn as nn
from advertorch.attacks import PGDAttack

adversary = PGDAttack(model, loss_fn=nn.CrossEntropyLoss(), eps=0.3,
                      nb_iter=40, eps_iter=0.01, rand_init=True)
adv_examples = adversary.perturb(x, y)

AdvBox:

from advbox.attacks import PGDAttack
attack = PGDAttack(model)
adv_examples = attack(x, y, epsilon=0.3, num_steps=40, step_size=0.01)
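
Both snippets above drive PGD through a library. To make the meaning of eps, nb_iter/num_steps, and eps_iter/step_size concrete, here is a minimal from-scratch sketch of an L-infinity PGD loop in plain PyTorch; it is a simplified illustration, not either library's implementation, and it assumes inputs in [0, 1]:

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, nb_iter=40, eps_iter=0.01, rand_init=True):
    # Iterated FGSM steps, projected back into the eps-ball around x after each step.
    delta = torch.empty_like(x).uniform_(-eps, eps) if rand_init else torch.zeros_like(x)
    for _ in range(nb_iter):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eps_iter * grad.sign()).clamp(-eps, eps).detach()  # step, then project
    return (x + delta).clamp(0, 1)   # stay in the valid input range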

Summary

advertorch offers a more extensive toolkit for adversarial machine learning, with better documentation and ongoing development. However, it may be more challenging for beginners and is primarily designed for PyTorch users. AdvBox, while potentially easier to use, has a more limited set of features and less active development. The code comparison shows that both libraries offer similar functionality, but with slightly different syntax and implementation details.


README

Advbox Family


Advbox Family is a series of AI model security tools open-sourced by Baidu, covering the generation and detection of adversarial examples, protection against them, and attack and defense cases for different AI applications.

Advbox Family supports Python 3.*.

Our Work

AdvSDK

A lightweight SDK for generating adversarial examples against PaddlePaddle models.

Homepage of AdvSDK

AdversarialBox

AdversarialBox is a toolbox for generating adversarial examples that fool neural networks built with PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, or TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox provides a command-line tool for generating adversarial examples with zero coding. It is inspired by and based on FoolBox v1.

Homepage of AdversarialBox

AdvDetect

AdvDetect is a toolbox for detecting adversarial examples in massive data sets.

Homepage of AdvDetect

AdvPoison

Data poisoning

AI applications

Face Recognition Attack

Homepage of Face Recognition Attack

Stealth T-shirt

At DEF CON, we demonstrated T-shirts that make the wearer disappear from smart cameras. Under this sub-project, we open-source the demo programs and the deployment instructions for the smart cameras used in the demonstration.

Homepage of Stealth T-shirt


Fake Face Detect

A RESTful API for detecting whether the face in a picture or video is a fake face.

Homepage of Fake Face Detect


Papers and slides of Advbox Family

How to cite

If you use AdvBox in an academic publication, please cite as:

@misc{goodman2020advbox,
    title={Advbox: a toolbox to generate adversarial examples that fool neural networks},
    author={Dou Goodman and Hao Xin and Wang Yang and Wu Yuesheng and Xiong Junfeng and Zhang Huan},
    year={2020},
    eprint={2001.05574},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield

@inproceedings{goodman2019cloud,
  title={Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield},
  author={Goodman, Dou and Hao, Xin and Wang, Yang and Tang, Jiawei and Jia, Yunhan and Wei, Tao and others},
  booktitle={Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop},
  pages={43--43},
  year={2019},
  organization={ACM}
}

Who uses or cites AdvBox

  • Wu, Winston and Arendt, Dustin and Volkova, Svitlana; Evaluating Neural Model Robustness for Machine Comprehension; Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021, pp. 2470-2481
  • Pablo Navarrete Michelini, Hanwen Liu, Yunhua Lu, Xingqun Jiang; A Tour of Convolutional Networks Guided by Linear Interpreters; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4753-4762
  • Ling, Xiang and Ji, Shouling and Zou, Jiaxu and Wang, Jiannan and Wu, Chunming and Li, Bo and Wang, Ting; Deepsec: A uniform platform for security analysis of deep learning model; IEEE S&P, 2019
  • Deng, Ting and Zeng, Zhigang; Generate adversarial examples by spatially perturbing on the meaningful area; Pattern Recognition Letters, 2019, pp. 632-638

Issues report

https://github.com/baidu/AdvBox/issues

License

AdvBox is released under the Apache License 2.0.