
yenchenlin/awesome-adversarial-machine-learning

A curated list of awesome adversarial machine learning resources


Top Related Projects

  • Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
  • CleverHans: An adversarial example library for constructing attacks, building defenses, and benchmarking both
  • Adversarial Robustness Toolbox (ART): A Python library for machine learning security, covering evasion, poisoning, extraction, and inference attacks for both red and blue teams
  • AdvBox: A toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models and provides a command-line tool for generating adversarial examples with zero coding

Quick Overview

The "awesome-adversarial-machine-learning" repository is a curated list of resources related to adversarial machine learning. It provides a comprehensive collection of papers, tutorials, books, and tools focused on the security and robustness of machine learning models against adversarial attacks.

Pros

  • Extensive collection of resources covering various aspects of adversarial machine learning
  • Well-organized structure with clear categorization of different topics
  • Historically well maintained, though the list is now deprecated and no longer tracks new papers
  • Includes both theoretical and practical resources, catering to researchers and practitioners

Cons

  • May be overwhelming for beginners due to the large volume of information
  • Some links may become outdated over time if not regularly maintained
  • Lacks detailed explanations or summaries for each resource
  • Limited coverage of some emerging topics in adversarial machine learning

Note: As this is not a code library, the code example and quick start sections have been omitted.

Competitor Comparisons

Foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Pros of Foolbox

  • Practical implementation of adversarial attacks and defenses
  • Actively maintained with regular updates and contributions
  • Supports multiple deep learning frameworks (PyTorch, TensorFlow, JAX)

Cons of Foolbox

  • Focused on implementation rather than comprehensive literature review
  • May require more technical expertise to use effectively
  • Limited to specific attack and defense methods implemented in the library

Code Comparison

Foolbox (implementation-focused):

```python
import foolbox as fb

# Wrap a trained PyTorch model (`net`) together with its valid input range
model = fb.PyTorchModel(net, bounds=(0, 1))

# Run FGSM across a sweep of perturbation budgets
attack = fb.attacks.FGSM()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(model, images, labels, epsilons=epsilons)
```

Awesome Adversarial Machine Learning (resource-focused):

```markdown
## Attacks
- [Fast Gradient Sign Method (FGSM)](https://arxiv.org/abs/1412.6572)
- [Carlini & Wagner Attacks](https://arxiv.org/abs/1608.04644)
- [DeepFool](https://arxiv.org/abs/1511.04599)
```

Foolbox provides ready-to-use implementations, while Awesome Adversarial Machine Learning offers a curated list of resources and papers for further study.
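
To make the comparison concrete, here is a minimal from-scratch FGSM sketch in PyTorch, showing the one-step computation that both Foolbox's FGSM implementation and the papers linked above describe. The names `net`, `images`, and `labels` are hypothetical stand-ins for a trained classifier and a labeled batch.

```python
import torch
import torch.nn.functional as F

def fgsm(net, images, labels, eps=0.03):
    """One-step Fast Gradient Sign Method (Goodfellow et al., 2014)."""
    # `net`, `images`, `labels` are assumed inputs, not part of any library API.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(net(images), labels)
    loss.backward()
    # Perturb in the direction of the loss gradient's sign, then clip to [0, 1].
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```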

cleverhans

An adversarial example library for constructing attacks, building defenses, and benchmarking both

Pros of cleverhans

  • Provides a comprehensive library of adversarial example generation and defense methods
  • Offers practical implementations that can be directly used in research and development
  • Regularly updated with new attack and defense techniques

Cons of cleverhans

  • Focuses primarily on implementation rather than curating a list of resources
  • May have a steeper learning curve for beginners in the field
  • Historically tied to the Python/TensorFlow ecosystem, though newer releases add PyTorch and JAX support

Code Comparison

cleverhans:

```python
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_keras import KerasModelWrapper

# Legacy cleverhans (v2/v3) API, which targets TensorFlow 1.x sessions
model_wrap = KerasModelWrapper(model)
fgsm = FastGradientMethod(model_wrap, sess=sess)
adv_x = fgsm.generate(x, **fgsm_params)
```

awesome-adversarial-machine-learning:

No direct code implementation available as it is a curated list of resources.
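
As an aside, the cleverhans snippet above uses the legacy, TensorFlow-1.x-era API. Newer cleverhans releases expose a simpler functional interface; the sketch below assumes cleverhans 4.x with a PyTorch model, where `model` and `x` are hypothetical stand-ins, so verify the import path against your installed version.

```python
import numpy as np
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# `model` is an assumed torch.nn.Module classifier; `x` is a batched input tensor.
adv_x = fast_gradient_method(model, x, eps=0.1, norm=np.inf)
```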

Summary

cleverhans is a practical library for implementing adversarial attacks and defenses, while awesome-adversarial-machine-learning is a curated list of resources on the topic. cleverhans offers hands-on tools for researchers and developers, but may be more complex for beginners. awesome-adversarial-machine-learning provides a broader overview of the field and is more accessible for those starting to explore adversarial machine learning.

adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) is a Python library for machine learning security, covering evasion, poisoning, extraction, and inference attacks for both red and blue teams

Pros of adversarial-robustness-toolbox

  • Provides a comprehensive library of tools and algorithms for adversarial machine learning
  • Offers practical implementations and ready-to-use code for various attack and defense methods
  • Actively maintained with regular updates and contributions from the community

Cons of adversarial-robustness-toolbox

  • Focuses primarily on implementation rather than curating a list of resources
  • May have a steeper learning curve for beginners due to its extensive codebase
  • Limited in providing an overview of the field compared to a curated list

Code Comparison

adversarial-robustness-toolbox:

```python
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

# Wrap a trained Keras model together with its valid input range
classifier = KerasClassifier(model=model, clip_values=(0, 1))

# Configure FGSM with perturbation budget eps
attack = FastGradientMethod(classifier, eps=0.1)

# Generate adversarial examples (`x_test` is an assumed NumPy test batch)
x_adv = attack.generate(x=x_test)
```

awesome-adversarial-machine-learning:

```markdown
# No code implementation, as it's a curated list of resources

## Attacks
- [Fast Gradient Sign Method (FGSM)](https://arxiv.org/abs/1412.6572)
- [Carlini & Wagner Attacks](https://arxiv.org/abs/1608.04644)
```

AdvBox

A toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models and provides a command-line tool for generating adversarial examples with zero coding

Pros of AdvBox

  • Provides a comprehensive toolbox for adversarial attack and defense methods
  • Offers implementations for various deep learning frameworks (PaddlePaddle, PyTorch, TensorFlow)
  • Includes tutorials and examples for practical usage

Cons of AdvBox

  • Less frequently updated compared to awesome-adversarial-machine-learning
  • Focuses primarily on implementation rather than curating a wide range of resources
  • May have a steeper learning curve for beginners

Code Comparison

AdvBox example (schematic attack usage):

```python
# Simplified sketch of a PGD attack call; AdvBox's full API also involves
# framework-specific model wrappers and attack configuration (see its tutorials).
attack = PGD(model)
adv_x = attack(x, y)
```

awesome-adversarial-machine-learning doesn't provide code implementations directly, as it's a curated list of resources.
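
For contrast with the one-step FGSM sketches earlier, here is a minimal from-scratch PGD sketch in PyTorch showing what a PGD attack like AdvBox's computes: repeated gradient-sign steps projected back onto an L-infinity ball around the input. `net`, `x`, and `y` are hypothetical stand-ins for a model and a labeled batch.

```python
import torch
import torch.nn.functional as F

def pgd(net, x, y, eps=0.03, alpha=0.007, steps=10):
    """Projected Gradient Descent (Madry et al., 2017), L-infinity variant."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(net(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project onto the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```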

Summary

AdvBox is a practical toolbox for adversarial machine learning, offering implementations and examples across multiple frameworks. awesome-adversarial-machine-learning, on the other hand, serves as a comprehensive resource list, providing links to various papers, tools, and tutorials in the field. While AdvBox is more hands-on, awesome-adversarial-machine-learning offers a broader overview of the topic, making it potentially more suitable for researchers and those seeking to explore the field's landscape.


README

:warning: Deprecated

I no longer include up-to-date papers, but the list is still a good reference for starters.

Awesome Adversarial Machine Learning

A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.

Table of Contents

  • Blogs
  • Papers
      • General
      • Attack
          • Image Classification
          • Reinforcement Learning
          • Segmentation & Object Detection
          • VAE-GAN
          • Speech Recognition
          • Question Answering System
      • Defence
          • Adversarial Training
          • Defensive Distillation
          • Generative Model
          • Regularization
          • Others
  • Talks
  • Licenses

License

CC0

To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.