
jcjohnson/fast-neural-style

Feedforward style transfer


Top Related Projects

  • fast-style-transfer: TensorFlow CNN for fast style transfer ⚡🖥🎨🖼
  • examples: A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
  • Magenta: Music and Art Generation with Machine Intelligence
  • deep-photo-styletransfer: Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
  • neural-style: Neural style in TensorFlow! 🎨
  • style-transfer: An implementation of "A Neural Algorithm of Artistic Style" by L. Gatys, A. Ecker, and M. Bethge. http://arxiv.org/abs/1508.06576

Quick Overview

Fast Neural Style is a Torch (Lua) implementation of neural style transfer that applies artistic styles to images quickly. It uses a feed-forward neural network to generate stylized images in real time, making it much faster than traditional optimization-based methods.

Pros

  • Extremely fast stylization, capable of real-time performance
  • Supports both still images and real-time webcam video
  • Pre-trained models available for immediate use
  • Customizable and extensible for training new style models

Cons

  • Limited to styles it has been trained on
  • May produce lower-fidelity results than slower, optimization-based methods
  • Requires significant computational resources for training new models
  • Documentation could be more comprehensive for beginners

Code Examples

The repository is driven from the command line via Torch; the commands below follow the README, and the model paths assume you have downloaded the pretrained models.

  1. Stylizing an image:

    th fast_neural_style.lua \
      -model models/eccv16/starry_night.t7 \
      -input_image images/content/chicago.jpg \
      -output_image out.png

  2. Stylizing an entire directory of images:

    th fast_neural_style.lua \
      -model models/eccv16/starry_night.t7 \
      -input_dir images/content/ \
      -output_dir out/

  3. Stylizing a webcam stream in real time (note qlua instead of th):

    qlua webcam_demo.lua -models models/instance_norm/candy.t7 -gpu 0

Training new style models is done with the training code described under "Training new models" below.

Getting Started

  1. Clone the repository:

    git clone https://github.com/jcjohnson/fast-neural-style.git
    cd fast-neural-style

  2. Install Torch, then install the required Lua packages:

    luarocks install torch
    luarocks install nn
    luarocks install image
    luarocks install lua-cjson

  3. Download the pretrained models (roughly 200MB):

    bash models/download_style_transfer_models.sh

  4. Stylize an image:

    th fast_neural_style.lua \
      -model models/eccv16/starry_night.t7 \
      -input_image </path/to/content/image> \
      -output_image </path/to/output/image>

Replace the placeholder paths with your actual image paths. The script runs on the CPU by default; add -gpu 0 to run on the first GPU, as in the example below.
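For example, a GPU run with one of the downloaded instance-normalization models (input/output file names are placeholders):

    th fast_neural_style.lua \
      -model models/instance_norm/candy.t7 \
      -input_image in.jpg \
      -output_image out.png \
      -gpu 0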

Competitor Comparisons

TensorFlow CNN for fast style transfer ⚡🖥🎨🖼

Pros of Fast-Style-Transfer

  • Supports both TensorFlow 1.x and 2.x versions
  • Includes a pre-trained VGG19 network for feature extraction
  • Offers a more user-friendly interface for training and stylization

Cons of Fast-Style-Transfer

  • May have slightly slower inference time compared to Fast-Neural-Style
  • Requires more dependencies and setup steps
  • Limited to specific pre-trained models for style transfer

Code Comparison

Fast-Style-Transfer:

# from evaluate.py; stylizes one image with a trained checkpoint
# and writes the result to output_path (paths are placeholders)
from evaluate import ffwd_to_img

ffwd_to_img(content_path, output_path, checkpoint_dir)

Fast-Neural-Style:

-- simplified sketch: the real script also preprocesses and resizes the input
local img = image.load(input_image, 3)
local output = model:forward(img):float()
image.save(output_image, output)

Fast-Style-Transfer uses Python and TensorFlow, while Fast-Neural-Style is implemented in Lua and Torch. Fast-Style-Transfer provides a more straightforward API for stylization, whereas Fast-Neural-Style requires manual loading and processing of images.

Both repositories offer efficient neural style transfer implementations, but Fast-Style-Transfer may be more accessible for users familiar with Python and TensorFlow ecosystems.


A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.

Pros of examples

  • Broader scope: Covers multiple AI/ML topics beyond just neural style transfer
  • More actively maintained: Regular updates and contributions from the PyTorch community
  • Better documentation: Includes detailed explanations and comments for each example

Cons of examples

  • Less focused: Not specialized for fast neural style transfer like fast-neural-style
  • Potentially more complex: May require more setup and understanding of PyTorch ecosystem

Code Comparison

fast-neural-style:

local cmd = torch.CmdLine()
cmd:option('-style_image', 'examples/inputs/seated-nude.jpg', 'Style target image')
cmd:option('-content_image', 'examples/inputs/tubingen.jpg', 'Content target image')
cmd:option('-image_size', 512, 'Maximum height / width of generated image')
local opt = cmd:parse(arg)

examples (neural style transfer):

parser = argparse.ArgumentParser(description='PyTorch Neural Style Transfer')
parser.add_argument('--content-image', type=str, default='picasso.jpg',
                    help='path to content image')
parser.add_argument('--style-image', type=str, default='dancing.jpg',
                    help='path to style image')
parser.add_argument('--output-image', type=str, default='output.jpg',
                    help='path to output image')
args = parser.parse_args()

Magenta: Music and Art Generation with Machine Intelligence

Pros of Magenta

  • Broader scope: Covers various aspects of music and art generation, not just style transfer
  • Active development: Regularly updated with new features and improvements
  • Extensive documentation and examples for different use cases

Cons of Magenta

  • Steeper learning curve due to its broader scope and more complex architecture
  • Potentially higher resource requirements for running some models
  • May be overkill for users solely interested in style transfer

Code Comparison

Fast-Neural-Style:

local cmd = torch.CmdLine()
cmd:option('-style_image', 'examples/inputs/starry_night.jpg', 'Style target image')
cmd:option('-content_image', 'examples/inputs/tubingen.jpg', 'Content target image')
cmd:option('-image_size', 512, 'Maximum height / width of generated image')

Magenta:

import magenta.music as mm
from magenta.models.melody_rnn import melody_rnn_sequence_generator

# load a pretrained bundle (.mag file) and look up its generator
bundle = mm.sequence_generator_bundle.read_bundle_file('attention_rnn.mag')
generator_map = melody_rnn_sequence_generator.get_generator_map()

The code snippets highlight the different focus areas of the two projects. Fast-Neural-Style is specifically tailored for style transfer in images, while Magenta offers a broader range of tools for music and art generation, including melody creation in this example.

Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511

Pros of deep-photo-styletransfer

  • Produces more photorealistic results, preserving the structure of the original photo
  • Offers finer control over the style transfer process with additional parameters
  • Includes a photorealistic smoothing step for improved output quality

Cons of deep-photo-styletransfer

  • Slower processing time compared to fast-neural-style
  • Requires more setup and dependencies, including CUDA and cuDNN
  • Less user-friendly for beginners due to more complex configuration

Code Comparison

deep-photo-styletransfer:

local content_image = image.load(params.content_image, 3)
local style_image = image.load(params.style_image, 3)
local content_layers = params.content_layers
local style_layers = params.style_layers

fast-neural-style:

local cmd = torch.CmdLine()
cmd:option('-style_image', '', 'Path to style image')
cmd:option('-content_image', '', 'Path to content image')
cmd:option('-model', '', 'Path to trained model')

Both repositories use Lua for their main scripts, but deep-photo-styletransfer has more complex initialization and parameter handling. fast-neural-style focuses on simplicity and speed, while deep-photo-styletransfer prioritizes photorealistic results at the cost of increased complexity and processing time.

Neural style in TensorFlow! 🎨

Pros of neural-style

  • Offers more flexibility in customizing the style transfer process
  • Provides a wider range of style transfer options and parameters
  • Allows for finer control over the output image quality

Cons of neural-style

  • Significantly slower processing time compared to fast-neural-style
  • Requires more computational resources for style transfer
  • Less suitable for real-time applications or processing large batches of images

Code Comparison

neural-style:

parser.add_argument('--content_weight', type=float, default=5e0)
parser.add_argument('--style_weight', type=float, default=1e2)
parser.add_argument('--tv_weight', type=float, default=1e-3)
parser.add_argument('--num_iterations', type=int, default=1000)

fast-neural-style:

cmd:option('-style_weight', 5e0)
cmd:option('-content_weight', 1e0)
cmd:option('-tv_weight', 1e-6)
cmd:option('-num_iterations', 300)

The code snippets show that neural-style exposes granular control over loss weights and iteration counts. The fast-neural-style options shown come from its optimization-based baseline; its feedforward models need no per-image iterations at all once trained.

An implementation of "A Neural Algorithm of Artistic Style" by L. Gatys, A. Ecker, and M. Bethge. http://arxiv.org/abs/1508.06576.

Pros of style-transfer

  • Supports multiple style transfer algorithms (Gatys et al., Chen & Schmidt, etc.)
  • Includes pre-trained models for quick style transfer
  • Offers both CPU and GPU implementations

Cons of style-transfer

  • Less optimized for real-time performance compared to fast-neural-style
  • May require more setup and dependencies
  • Documentation is less comprehensive

Code Comparison

style-transfer:

from style_transfer import StyleTransfer

st = StyleTransfer()
stylized = st.transfer_style("content.jpg", "style.jpg")

fast-neural-style:

th fast_neural_style.lua -model models/instance_norm/candy.t7 -input_image content.jpg -output_image out.png

fast-neural-style focuses on speed and efficiency, making it better suited for real-time applications. It uses a feed-forward network to achieve faster processing times. style-transfer, on the other hand, offers more flexibility with multiple algorithms and pre-trained models, but may be slower for real-time use.

fast-neural-style is implemented in Lua and Torch, while style-transfer uses Python and various deep learning frameworks. This difference in implementation languages and frameworks may influence the choice depending on the user's familiarity and project requirements.

Both repositories provide valuable tools for neural style transfer, with fast-neural-style emphasizing speed and style-transfer offering more algorithmic options.


README

fast-neural-style

This is the code for the paper

Perceptual Losses for Real-Time Style Transfer and Super-Resolution
Justin Johnson, Alexandre Alahi, Li Fei-Fei
Presented at ECCV 2016

The paper builds on A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge by training feedforward neural networks that apply artistic styles to images. After training, our feedforward networks can stylize images hundreds of times faster than the optimization-based method presented by Gatys et al.
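In brief, the stylization network is trained to minimize perceptual losses measured on a fixed, pretrained loss network φ. A sketch of the objective in the spirit of the paper (notation paraphrased rather than quoted; the λ's are hyperparameter weights):

$$
\mathcal{L}(\hat{y}) = \lambda_c \, \ell_{\text{feat}}^{\phi,j}(\hat{y}, y_c) + \lambda_s \sum_{j'} \ell_{\text{style}}^{\phi,j'}(\hat{y}, y_s) + \lambda_{TV} \, \ell_{TV}(\hat{y}),
\qquad
\ell_{\text{feat}}^{\phi,j}(\hat{y}, y) = \frac{\lVert \phi_j(\hat{y}) - \phi_j(y) \rVert_2^2}{C_j H_j W_j}
$$

where $\hat{y}$ is the network's output, $y_c$ and $y_s$ are the content and style targets, and the style loss compares Gram matrices of the φ activations at several layers. At test time a single forward pass replaces the hundreds of optimization steps used by Gatys et al.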

This repository also includes an implementation of instance normalization as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization by Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. This simple trick significantly improves the quality of feedforward style transfer models.

Stylizing this image of the Stanford campus at a resolution of 1200x630 takes 50 milliseconds on a Pascal Titan X.

In this repository we provide:

  • Pretrained style transfer models, including the models used in the paper
  • fast_neural_style.lua, a script for stylizing new images with a trained model
  • webcam_demo.lua, a real-time webcam demo
  • Training code for learning new style models
  • slow_neural_style.lua, an optimization-based baseline in the style of Gatys et al.

If you find this code useful for your research, please cite

@inproceedings{Johnson2016Perceptual,
  title={Perceptual losses for real-time style transfer and super-resolution},
  author={Johnson, Justin and Alahi, Alexandre and Fei-Fei, Li},
  booktitle={European Conference on Computer Vision},
  year={2016}
}

Setup

All code is implemented in Torch.

First install Torch, then update / install the following packages:

luarocks install torch
luarocks install nn
luarocks install image
luarocks install lua-cjson

(Optional) GPU Acceleration

If you have an NVIDIA GPU, you can accelerate all operations with CUDA.

First install CUDA, then update / install the following packages:

luarocks install cutorch
luarocks install cunn

(Optional) cuDNN

When using CUDA, you can use cuDNN to accelerate convolutions.

First download cuDNN and copy the libraries to /usr/local/cuda/lib64/. Then install the Torch bindings for cuDNN:

luarocks install cudnn

Pretrained Models

Download all pretrained style transfer models by running the script

bash models/download_style_transfer_models.sh

This will download ten model files (~200MB) to the folder models/.
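After the script finishes, the models land in two subfolders, described in the next sections (layout shown for orientation; the file names are examples):

models/
├── eccv16/          # models used in the ECCV 2016 paper, e.g. starry_night.t7
└── instance_norm/   # instance-normalization models, e.g. candy.t7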

Models from the paper

The style transfer models we used in the paper will be located in the folder models/eccv16. Here are some example results where we use these models to stylize this image of the Chicago skyline at an image size of 512.


Models with instance normalization

As discussed in the paper Instance Normalization: The Missing Ingredient for Fast Stylization by Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, replacing batch normalization with instance normalization significantly improves the quality of feedforward style transfer models.
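Concretely, instance normalization computes the mean and variance per sample and per channel, over spatial positions only, rather than across the whole batch. A sketch in standard notation (not quoted from the paper):

$$
y_{nchw} = \frac{x_{nchw} - \mu_{nc}}{\sqrt{\sigma_{nc}^2 + \epsilon}},
\qquad
\mu_{nc} = \frac{1}{HW} \sum_{h,w} x_{nchw},
\qquad
\sigma_{nc}^2 = \frac{1}{HW} \sum_{h,w} \left( x_{nchw} - \mu_{nc} \right)^2
$$

Each image is therefore normalized independently of whatever else happens to be in its batch, which matters because stylization statistics are inherently per-image.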

We have trained several models with instance normalization; after downloading pretrained models they will be in the folder models/instance_norm.

These models use the same architecture as those used in our paper, except with half the number of filters per layer and with instance normalization instead of batch normalization. Using narrower layers makes the models smaller and faster without sacrificing model quality.
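These models are used exactly like the ECCV16 models; for example:

th fast_neural_style.lua \
  -model models/instance_norm/candy.t7 \
  -input_image images/content/chicago.jpg \
  -output_image out.png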

Here are some example outputs from these models, with an image size of 1024.



Running on new images

The script fast_neural_style.lua lets you use a trained model to stylize new images:

th fast_neural_style.lua \
  -model models/eccv16/starry_night.t7 \
  -input_image images/content/chicago.jpg \
  -output_image out.png

You can run the same model on an entire directory of images like this:

th fast_neural_style.lua \
  -model models/eccv16/starry_night.t7 \
  -input_dir images/content/ \
  -output_dir out/

You can control the size of the output images using the -image_size flag.

By default this script runs on CPU; to run on GPU, add the flag -gpu specifying the GPU on which to run.
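For example, the following (illustrative) invocation combines both flags to render a 1024-pixel output on GPU 0:

th fast_neural_style.lua \
  -model models/eccv16/starry_night.t7 \
  -input_image images/content/chicago.jpg \
  -output_image out.png \
  -image_size 1024 \
  -gpu 0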

The full set of options for this script is described here.

Webcam demo

You can use the script webcam_demo.lua to run one or more models in real-time off a webcam stream. To run this demo you need to use qlua instead of th:

qlua webcam_demo.lua -models models/instance_norm/candy.t7 -gpu 0

You can run multiple models at the same time by passing a comma-separated list to the -models flag:

qlua webcam_demo.lua \
  -models models/instance_norm/candy.t7,models/instance_norm/udnie.t7 \
  -gpu 0

With a Pascal Titan X you can easily run four models in real time at 640x480.

The webcam demo depends on two extra Lua packages, camera and qtlua. You can install / update these packages by running:

luarocks install camera
luarocks install qtlua

The full set of options for this script is described here.

Training new models

You can find instructions for training new models here.

Optimization-based Style Transfer

The script slow_neural_style.lua is similar to the original neural-style, and uses the optimization-based style-transfer method described by Gatys et al.

This script uses the same code for computing losses as the feedforward training script, allowing for fair comparisons between feedforward style transfer networks and optimization-based style transfer.
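A minimal invocation might look like the sketch below; the flag names are assumed to mirror the original neural-style, so check the linked options documentation for the exact set:

th slow_neural_style.lua \
  -content_image images/content/chicago.jpg \
  -style_image </path/to/style/image> \
  -output_image out.png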

Compared to the original neural-style, this script has the following improvements:

  • Removes the dependency on protobuf and loadcaffe
  • Supports many more CNN architectures, including ResNets

The full set of options for this script is described here.

License

Free for personal or research use; for commercial use please contact me.