goodfeli/adversarial

Code and hyperparameters for the paper "Generative Adversarial Networks"


Top Related Projects

  • cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both
  • foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
  • advertorch: A Toolbox for Adversarial Robustness Research
  • adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Quick Overview

The goodfeli/adversarial repository is a collection of code and resources related to adversarial examples in machine learning. It contains implementations of various adversarial attack methods and defenses, as well as tutorials and research papers on the topic. The repository serves as a valuable resource for researchers and practitioners interested in studying and improving the robustness of machine learning models.

Pros

  • Comprehensive collection of adversarial attack and defense methods
  • Well-documented code with clear explanations and examples
  • Includes implementations for both TensorFlow and PyTorch frameworks
  • Regularly updated with new research and techniques

Cons

  • Some parts of the codebase may be outdated or not compatible with the latest versions of deep learning frameworks
  • Limited focus on more recent adversarial techniques and defenses
  • Lacks a standardized evaluation framework for comparing different methods
  • May require significant computational resources to run some of the more complex attacks and defenses

Code Examples

  1. Generating an adversarial example using the Fast Gradient Sign Method (FGSM):
import tensorflow as tf
from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_tf import model_eval

# Assume 'model' is a pre-trained, cleverhans-wrapped TensorFlow model,
# 'sess' is an active tf.Session, 'x' and 'y' are input and label
# placeholders, and 'x_test'/'y_test' are the evaluation data

fgsm = FastGradientMethod(model, sess=sess)
adv_x = fgsm.generate(x, eps=0.3, clip_min=0., clip_max=1.)
preds_adv = model.get_probs(adv_x)
adv_accuracy = model_eval(sess, x, y, preds_adv, x_test, y_test,
                          args={'batch_size': 128})
  2. Implementing adversarial training:
import numpy as np
import torch
import torch.nn.functional as F
from cleverhans.torch.attacks.projected_gradient_descent import projected_gradient_descent

def adversarial_train(model, train_loader, optimizer, epsilon):
    model.train()
    for inputs, targets in train_loader:
        optimizer.zero_grad()

        # Generate adversarial examples with PGD (step size 0.01, 40 steps, L-inf norm)
        adv_inputs = projected_gradient_descent(model, inputs, epsilon, 0.01, 40, np.inf)

        # Train on both clean and adversarial examples
        outputs = model(inputs)
        adv_outputs = model(adv_inputs)
        loss = 0.5 * (F.cross_entropy(outputs, targets) + F.cross_entropy(adv_outputs, targets))

        loss.backward()
        optimizer.step()
  3. Evaluating model robustness using the Carlini & Wagner (C&W) attack:
from cleverhans.attacks import CarliniWagnerL2
from cleverhans.utils_tf import model_eval

# Assume 'model' is a pre-trained, cleverhans-wrapped TensorFlow model,
# 'sess' is an active tf.Session, 'x' and 'y' are input and label
# placeholders, and 'x_test'/'y_test' are the evaluation data

cw = CarliniWagnerL2(model, sess=sess)
adv_x = cw.generate(x, y=y, confidence=0.5)
preds_adv = model.get_probs(adv_x)
cw_accuracy = model_eval(sess, x, y, preds_adv, x_test, y_test,
                         args={'batch_size': 128})

Getting Started

To get started with the goodfeli/adversarial repository:

  1. Clone the repository:

    git clone https://github.com/goodfeli/adversarial.git
    cd adversarial
    
  2. Install the required dependencies:

    pip install -r requirements.txt
    
  3. Run the examples or use the provided modules in your own projects:

    from cleverhans.attacks import FastGradientMethod
    from cleverhans.utils_tf import model_eval
    
    # Your code here to create a TF session (sess) and load a model and dataset
    
    fgsm = FastGradientMethod(model, sess=sess)
    adv_x = fgsm.generate(x, eps=0.3)
    

For more detailed instructions and examples, refer to the repository's README and documentation.

Competitor Comparisons

cleverhans: An adversarial example library for constructing attacks, building defenses, and benchmarking both

Pros of cleverhans

  • More comprehensive library with a wider range of attack and defense methods
  • Better documentation and examples for easier implementation
  • Actively maintained with regular updates and contributions

Cons of cleverhans

  • Larger codebase, potentially more complex to navigate
  • May have a steeper learning curve for beginners
  • Requires more dependencies and setup

Code Comparison

cleverhans:

import numpy as np
from cleverhans.future.tf2.attacks import fast_gradient_method
from cleverhans.future.tf2.attacks import projected_gradient_descent

fgm_attack = fast_gradient_method(model, x, eps=0.3, norm=np.inf)
pgd_attack = projected_gradient_descent(model, x, eps=0.3, eps_iter=0.01, nb_iter=40, norm=np.inf)

adversarial:

import numpy as np
from adversarial import fgm

x_adv = fgm(model, x, eps=0.3, ord=np.inf)

The cleverhans library offers more attack options and flexibility, while adversarial provides a simpler interface for basic attacks. cleverhans is better suited for researchers and advanced practitioners, whereas adversarial might be more appropriate for quick implementations or educational purposes.

foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Pros of Foolbox

  • More actively maintained with regular updates
  • Supports a wider range of deep learning frameworks (PyTorch, TensorFlow, JAX, etc.)
  • Extensive documentation and examples for various attack methods

Cons of Foolbox

  • Steeper learning curve for beginners
  • May have higher computational overhead for some attacks

Code Comparison

Foolbox:

import foolbox as fb
model = fb.PyTorchModel(net, bounds=(0, 1))
attack = fb.attacks.PGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(model, images, labels, epsilons=epsilons)

Adversarial:

from cleverhans.attacks import FastGradientMethod
fgsm = FastGradientMethod(model)
adv_x = fgsm.generate(x, eps=0.3, clip_min=0, clip_max=1)

Foolbox offers a more flexible API with support for multiple epsilon values in a single call, while Adversarial provides a simpler interface for basic attacks. Foolbox's modular design allows for easier customization and extension of attack methods.

advertorch: A Toolbox for Adversarial Robustness Research

Pros of advertorch

  • More comprehensive and actively maintained library
  • Wider range of attack and defense methods implemented
  • Better documentation and examples for usage

Cons of advertorch

  • Steeper learning curve due to more complex API
  • Requires PyTorch, which may not be suitable for all users
  • Larger codebase, potentially harder to customize

Code Comparison

advertorch:

import torch.nn as nn
from advertorch.attacks import PGDAttack

adversary = PGDAttack(model, loss_fn=nn.CrossEntropyLoss(), eps=0.3,
                      nb_iter=40, eps_iter=0.01, rand_init=True)
adv_examples = adversary.perturb(images, labels)

adversarial:

from cleverhans.attacks import ProjectedGradientDescent
pgd = ProjectedGradientDescent(model, sess=sess)
adv_x = pgd.generate(x, eps=0.3, eps_iter=0.01, nb_iter=40)

advertorch offers a more object-oriented approach with a dedicated adversary object, while adversarial uses a functional style. advertorch's implementation is more flexible and easier to extend, but may require more setup. adversarial's implementation is simpler and more straightforward for basic use cases.

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Pros of adversarial-robustness-toolbox

  • Comprehensive library with a wide range of adversarial attack and defense techniques
  • Active development and maintenance with regular updates
  • Extensive documentation and examples for ease of use

Cons of adversarial-robustness-toolbox

  • Steeper learning curve due to its extensive feature set
  • Potentially slower execution for simple tasks compared to lightweight alternatives

Code Comparison

adversarial-robustness-toolbox:

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier

classifier = KerasClassifier(model=model, clip_values=(0, 1))
attack = FastGradientMethod(classifier, eps=0.1)
x_adv = attack.generate(x)

adversarial:

from cleverhans.attacks import FastGradientMethod
from cleverhans.utils_keras import KerasModelWrapper

wrap = KerasModelWrapper(model)
fgsm = FastGradientMethod(wrap, sess=sess)
x_adv = fgsm.generate(x, eps=0.1)

Both repositories provide implementations of adversarial attacks and defenses, but adversarial-robustness-toolbox offers a more extensive set of tools and better documentation. However, adversarial might be simpler to use for basic tasks. The code examples show similar implementations of the Fast Gradient Sign Method attack, with adversarial-robustness-toolbox using a slightly different structure and naming convention.


README

Generative Adversarial Networks

This repository contains the code and hyperparameters for the paper:

"Generative Adversarial Networks." Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. ArXiv 2014.

Please cite this paper if you use the code in this repository as part of a published research project.

We are an academic lab, not a software company, and have no personnel devoted to documenting and maintaining this research code. Therefore this code is offered with absolutely no support. Exact reproduction of the numbers in the paper depends on exact reproduction of many factors, including the version of all software dependencies and the choice of underlying hardware (GPU model, etc.). We used NVIDIA GeForce GTX 580 graphics cards; other hardware will use different tree structures for summation and incur different rounding error. If you do not reproduce our setup exactly, you should expect to need to re-tune your hyperparameters slightly for your new setup.

Moreover, we have not integrated any unit tests for this code into Theano or Pylearn2, so subsequent changes to those libraries may break the code in this repository. If you encounter problems with this code, you should make sure that you are using the development branches of Pylearn2 and Theano, and use "git checkout" to go to a commit from approximately June 9, 2014.

This code itself requires no installation beyond making sure that the "adversarial" directory sits inside a directory on your PYTHONPATH. If it is installed correctly, 'python -c "import adversarial"' will work. You must also install Pylearn2 and Pylearn2's dependencies (Theano, numpy, etc.).
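
A minimal way to check this from Python (the path below is only a placeholder for wherever the directory containing your clone lives; adding that directory to PYTHONPATH achieves the same thing):

    import sys

    # Hypothetical location: point this at the directory that *contains* the
    # cloned "adversarial" directory, or add that directory to PYTHONPATH instead.
    sys.path.insert(0, "/path/to/parent/of/adversarial")

    import adversarial
    print("adversarial imported from", adversarial.__file__)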

parzen_ll.py is the script used to estimate the log-likelihood of the model using the Parzen window density estimation technique.
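
For intuition only (this is a simplified sketch, not the parzen_ll.py script itself): the technique fits a Gaussian Parzen window to samples drawn from the generator and reports the mean log-likelihood of held-out data under that density. The bandwidth sigma is assumed to be given here, whereas the paper selects it on a validation set.

    import numpy as np

    def parzen_log_likelihood(generated, test, sigma):
        """Mean log-likelihood of `test` under a Gaussian Parzen window
        fit to `generated` samples; both arrays have shape (n, d)."""
        n, d = generated.shape
        # Squared distances between every test point and every kernel centre.
        sq_dists = np.sum((test[:, None, :] - generated[None, :, :]) ** 2, axis=-1)
        # log p(x) = logsumexp_i(-||x - g_i||^2 / (2 sigma^2))
        #            - log n - (d / 2) * log(2 pi sigma^2)
        log_p = (np.logaddexp.reduce(-sq_dists / (2.0 * sigma ** 2), axis=1)
                 - np.log(n)
                 - 0.5 * d * np.log(2.0 * np.pi * sigma ** 2))
        return log_p.mean()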

Call pylearn2/scripts/train.py on the various YAML files in this repository to train the model for each dataset reported in the paper. The names of the *.yaml files are fairly self-explanatory.
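
If you prefer to drive training from Python rather than the train.py script, the same configurations can be loaded with Pylearn2's YAML parser; "mnist.yaml" below is only a placeholder for whichever of the repository's *.yaml files you want to run:

    from pylearn2.config import yaml_parse

    # Placeholder filename: substitute one of the repository's *.yaml files.
    with open("mnist.yaml") as f:
        train_obj = yaml_parse.load(f.read())
    train_obj.main_loop()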