
deepfakes/faceswap

Deepfakes Software For All


Top Related Projects

  • DeepFaceLab: the leading software for creating deepfakes.
  • first-order-model: source code for the paper "First Order Motion Model for Image Animation".
  • SimSwap: an arbitrary face-swapping framework for images and videos with one single trained model.
  • Wav2Lip: code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020; an HD commercial model is available from Sync Labs.

Quick Overview

Faceswap is an open-source deepfake application that allows users to replace faces in images and videos. It utilizes deep learning techniques to create realistic face swaps, providing a user-friendly interface for both beginners and advanced users.

Pros

  • User-friendly GUI for easy operation
  • Supports both image and video face swapping
  • Highly customizable with various training options and models
  • Active community and ongoing development

Cons

  • Potential for misuse in creating misleading or harmful content
  • Computationally intensive, requiring powerful hardware for optimal performance
  • Learning curve for advanced features and fine-tuning
  • Legal and ethical concerns surrounding deepfake technology

Code Examples

# Example 1: Extracting faces from an image
from lib.cli import extract
args = extract.get_arguments()
args.input_dir = "path/to/input/images"
args.output_dir = "path/to/output/faces"
extract.process(args)
# Example 2: Training a model
from lib.cli import train
args = train.get_arguments()
args.input_a = "path/to/faces/personA"
args.input_b = "path/to/faces/personB"
args.model_dir = "path/to/save/model"
train.process(args)
# Example 3: Converting faces in a video
from lib.cli import convert
args = convert.get_arguments()
args.input_dir = "path/to/input/video"
args.output_dir = "path/to/output/video"
args.model_dir = "path/to/trained/model"
convert.process(args)

Getting Started

  1. Clone the repository:

    git clone https://github.com/deepfakes/faceswap.git
    cd faceswap
    
  2. Install dependencies:

    python setup.py
    
  3. Run the GUI:

    python faceswap.py gui
    
  4. Alternatively, use command-line tools:

    python faceswap.py extract -i <input_dir> -o <output_dir>
    python faceswap.py train -A <input_A> -B <input_B> -m <model_dir>
    python faceswap.py convert -i <input_dir> -o <output_dir> -m <model_dir>
    

For detailed instructions and advanced usage, refer to the project's documentation.

Competitor Comparisons

DeepFaceLab is the leading software for creating deepfakes.

Pros of DeepFaceLab

  • More advanced and feature-rich, offering a wider range of models and options
  • Better overall output quality, especially for high-resolution videos
  • More active development and frequent updates

Cons of DeepFaceLab

  • Steeper learning curve, less user-friendly for beginners
  • Requires more powerful hardware for optimal performance
  • Less comprehensive documentation and community support

Code Comparison

DeepFaceLab:

```python
from core.leras import nn
from facelib import FaceType
from models import ModelBase
from samplelib import *

class Model(ModelBase):
    ...  # Model implementation
```

Faceswap:

```python
from lib.model.nn_blocks import Conv2DOutput, Conv2DBlock
from .original import Model as OriginalModel

class Model(OriginalModel):
    ...  # Model implementation
```

Both projects use similar deep learning frameworks and techniques, but DeepFaceLab's codebase is more complex and offers more customization options. Faceswap's code is generally more straightforward and easier to understand for beginners.

This repository contains the source code for the paper "First Order Motion Model for Image Animation".

Pros of first-order-model

  • More versatile, capable of animating various objects beyond faces
  • Requires fewer source images to produce high-quality results
  • Generates smoother and more natural-looking animations

Cons of first-order-model

  • Less specialized for face swapping, which may result in lower quality for specific face-swap tasks
  • Requires more computational resources due to its more complex architecture
  • Less active community support and fewer updates compared to faceswap

Code Comparison

faceswap:

```python
from lib.cli import args
from lib.config import Config
from lib.utils import get_folder

args = args.get_args()
config = Config(args.configfile)
```

first-order-model:

```python
from demo import load_checkpoints
from demo import make_animation
from skimage import img_as_ubyte
from skimage.transform import resize
```

The code snippets show that faceswap focuses on configuration and command-line interfaces, while first-order-model emphasizes image processing and animation generation functions. This reflects their different approaches and specializations in face swapping and general object animation, respectively.


An arbitrary face-swapping framework on images and videos with one single trained model!

Pros of SimSwap

  • Offers real-time face swapping capabilities
  • Provides better identity preservation and expression transfer
  • Supports both image and video face swapping

Cons of SimSwap

  • Requires more computational resources due to its advanced architecture
  • May have a steeper learning curve for beginners
  • Limited customization options compared to Faceswap

Code Comparison

SimSwap:

```python
import simswap

model = simswap.load_model()
result = simswap.swap_face(source_img, target_img, model)
```

Faceswap:

```python
from lib.cli import FullHelpArgumentParser
from plugins.train.model import Model
from plugins.extract.pipeline import Extractor

args = FullHelpArgumentParser().parse_args()
model = Model(args)
extractor = Extractor(args)
```

SimSwap focuses on a more streamlined API for face swapping, while Faceswap offers a more modular approach with separate components for extraction and model training. SimSwap's code is generally more concise and easier to use out of the box, but Faceswap provides greater flexibility for advanced users who want to customize various aspects of the face swapping process.


This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, please try out Sync Labs.

Pros of Wav2Lip

  • Specializes in lip-syncing, producing more accurate mouth movements
  • Focuses on audio-driven facial animation, ideal for dubbing and voice-over work
  • Faster processing time for lip-sync tasks

Cons of Wav2Lip

  • Limited to lip and lower face region manipulation
  • May struggle with extreme head poses or complex facial expressions
  • Less versatile for full face swapping or general facial manipulation

Code Comparison

Wav2Lip example:

```python
from wav2lip import inference

model = inference.load_model('path/to/model')
result = inference.predict(face='input_face.mp4', audio='input_audio.wav')
```

Faceswap example:

```python
from lib.cli import FullHelpArgumentParser
from scripts.train import TrainArgs
from scripts.convert import ConvertArgs

parser = FullHelpArgumentParser()
TrainArgs(parser)
ConvertArgs(parser)
```

Wav2Lip is more focused on lip-syncing tasks, with a simpler API for audio-driven facial animation. Faceswap offers a broader range of facial manipulation options but requires more complex setup and usage. Wav2Lip excels in specific lip-syncing scenarios, while Faceswap provides greater flexibility for full face swapping and general facial modifications.


README

deepfakes_faceswap


FaceSwap is a tool that utilizes deep learning to recognize and swap faces in pictures and videos.



Emma Stone/Scarlett Johansson FaceSwap using the Phaze-A model


Jennifer Lawrence/Steve Buscemi FaceSwap using the Villain model


Make sure you check out INSTALL.md before getting started.

Manifesto

FaceSwap has ethical uses.

When face swapping was first developed and published, the technology was groundbreaking: a huge step in AI development. It was also completely ignored outside of academia, because the code was confusing and fragmentary. It required a thorough understanding of complicated AI techniques and took a lot of effort to figure out, until one individual brought it together into a single, cohesive collection. It ran, it worked, and, as is so often the way with new technology emerging on the internet, it was immediately used to create inappropriate content. Despite the purposes the software was originally put to, it was the first AI code that anyone could download, run, and learn from by experimentation without having a Ph.D. in math, computer theory, psychology, and more. Before "deepfakes", these techniques were like black magic, practiced only by those who could understand all of the inner workings as described in esoteric and endlessly complicated books and papers.

"Deepfakes" changed all that and anyone could participate in AI development. To us, developers, the release of this code opened up a fantastic learning opportunity. It allowed us to build on ideas developed by others, collaborate with a variety of skilled coders, experiment with AI whilst learning new skills and ultimately contribute towards an emerging technology which will only see more mainstream use as it progresses.

Are there some out there doing horrible things with similar software? Yes. And because of this, the developers have been following strict ethical standards. Many of us don't even use it to create videos, we just tinker with the code to see what it does. Sadly, the media concentrates only on the unethical uses of this software. That is, unfortunately, the nature of how it was first exposed to the public, but it is not representative of why it was created, how we use it now, or what we see in its future. Like any technology, it can be used for good or it can be abused. It is our intention to develop FaceSwap in a way that its potential for abuse is minimized whilst maximizing its potential as a tool for learning, experimenting and, yes, for legitimate faceswapping.

We are not trying to denigrate celebrities or to demean anyone. We are programmers, we are engineers, we are Hollywood VFX artists, we are activists, we are hobbyists, we are human beings. To this end, we feel that it's time to come out with a standard statement of what this software is and isn't as far as us developers are concerned.

  • FaceSwap is not for creating inappropriate content.
  • FaceSwap is not for changing faces without consent or with the intent of hiding its use.
  • FaceSwap is not for any illicit, unethical, or questionable purposes.
  • FaceSwap exists to experiment and discover AI techniques, for social or political commentary, for movies, and for any number of ethical and reasonable uses.

We are very troubled by the fact that FaceSwap can be used for unethical and disreputable things. However, we support the development of tools and techniques that can be used ethically as well as provide education and experience in AI for anyone who wants to learn it hands-on. We will take a zero tolerance approach to anyone using this software for any unethical purposes and will actively discourage any such uses.

How To setup and run the project

FaceSwap is a Python program that will run on multiple operating systems, including Windows, Linux, and macOS.

See INSTALL.md for full installation instructions. You will need a modern GPU with CUDA support for best performance. Many AMD GPUs are supported through DirectML (Windows) and ROCm (Linux).

Overview

The project has multiple entry points. You will have to:

  • Gather photos and/or videos
  • Extract faces from your raw photos
  • Train a model on the faces extracted from the photos/videos
  • Convert your sources with the model

Check out USAGE.md for more detailed instructions.

Extract

From your setup folder, run python faceswap.py extract. This will take photos from the src folder and extract faces into the extract folder.

Train

From your setup folder, run python faceswap.py train. This will take photos from two folders, each containing pictures of one of the faces, and train a model that will be saved inside the models folder.

Convert

From your setup folder, run python faceswap.py convert. This will take photos from the original folder and apply the new faces, saving the results into the modified folder.
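The extract, train, and convert steps chain into one workflow. As a rough sketch, a helper can assemble the three CLI invocations in order; the build_pipeline function and all paths below are hypothetical placeholders, not part of faceswap, while the flags (-i/-o, -A/-B/-m) are the ones the CLI documents:

```python
# Hypothetical helper that assembles the faceswap CLI calls for the
# extract -> train -> convert workflow. All paths are placeholders.
def build_pipeline(src_a, src_b, faces_a, faces_b, model_dir, out_dir):
    return [
        ["python", "faceswap.py", "extract", "-i", src_a, "-o", faces_a],
        ["python", "faceswap.py", "extract", "-i", src_b, "-o", faces_b],
        ["python", "faceswap.py", "train", "-A", faces_a, "-B", faces_b, "-m", model_dir],
        ["python", "faceswap.py", "convert", "-i", src_a, "-o", out_dir, "-m", model_dir],
    ]

commands = build_pipeline("src/a", "src/b", "faces/a", "faces/b", "models/ab", "output")
for cmd in commands:
    print(" ".join(cmd))
```

Building the commands as lists (rather than running them directly) makes it easy to inspect or log the pipeline before kicking off a long training run.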

GUI

Alternatively, you can run the GUI by running python faceswap.py gui.

General notes:

  • All of the scripts mentioned have -h/--help options with arguments that they will accept. You're smart, you can figure out how this works, right?!

NB: there is a conversion tool for video. This can be accessed by running python tools.py effmpeg -h. Alternatively, you can use ffmpeg to convert a video into photos, process the images, and convert the images back to video.
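As a sketch of that ffmpeg round trip, the two invocations can be built like this; the file names, frame-number pattern, and 25 fps rate are placeholder assumptions, not values faceswap requires:

```python
# Build (but do not run) the two ffmpeg commands for the video <-> frames
# round trip: split a video into numbered PNG frames, then, after the
# frames have been processed, reassemble them into a video.
def ffmpeg_round_trip(video_in, frames_dir, video_out, fps=25):
    split = ["ffmpeg", "-i", video_in, f"{frames_dir}/%05d.png"]
    join = ["ffmpeg", "-framerate", str(fps), "-i", f"{frames_dir}/%05d.png",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", video_out]
    return split, join

split_cmd, join_cmd = ffmpeg_round_trip("input.mp4", "frames", "output.mp4")
print(" ".join(split_cmd))
print(" ".join(join_cmd))
```

Match the -framerate of the rebuilt video to the source's frame rate, or the output will play too fast or too slow.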

Some tips:

Reusing existing models will train much faster than starting from nothing. If there is not enough training data, start with someone who looks similar, then switch the data.

Help I need support!

Discord Server

Your best bet is to join the FaceSwap Discord server where there are plenty of users willing to help. Please note that, like this repo, this is a SFW Server!

FaceSwap Forum

Alternatively, you can post questions in the FaceSwap Forum. Please do not post general support questions in this repo as they are liable to be deleted without response.

Donate

The developers work tirelessly to improve and develop FaceSwap. Many hours have been put in to provide the software as it is today, but this is an extremely time-consuming process with no financial reward. If you enjoy using the software, please consider donating to the devs, so they can spend more time implementing improvements.

Patreon

The best way to support us is through our Patreon page:

become-a-patron

One time Donations

Alternatively, you can give a one-off donation to any of our devs:

@torzdf

There is very little FaceSwap code that hasn't been touched by torzdf. He is responsible for implementing the GUI, FAN aligner, MTCNN detector and porting the Villain, DFL-H128 and DFaker models to FaceSwap, as well as significantly improving many areas of the code.

Bitcoin: bc1qpm22suz59ylzk0j7qk5e4c7cnkjmve2rmtrnc6

Ethereum: 0xd3e954dC241B87C4E8E1A801ada485DC1d530F01

Monero: 45dLrtQZ2pkHizBpt3P3yyJKkhcFHnhfNYPMSnz3yVEbdWm3Hj6Kr5TgmGAn3Far8LVaQf1th2n3DJVTRkfeB5ZkHxWozSX

Paypal: torzdf

@andenixa

Creator of the Unbalanced and OHR models, as well as expanding various capabilities within the training process. Andenixa is currently working on new models and will take requests for donations.

Paypal: andenixa

How to contribute

For people interested in the generative models

  • Go to the 'faceswap-model' repository to discuss, suggest, or commit alternatives to the current algorithm.

For devs

  • Read this README entirely
  • Fork the repo
  • Play with it
  • Check issues with the 'dev' tag
  • For devs more interested in computer vision and OpenCV, look at issues with the 'opencv' tag. Also feel free to add your own alternatives/improvements

For non-dev advanced users

  • Read this README entirely
  • Clone the repo
  • Play with it
  • Check issues with the 'advuser' tag
  • Also go to the 'faceswap Forum' and help others.

For end-users

  • Get the code here and play with it if you can
  • You can also go to the faceswap Forum and help or get help from others.
  • Be patient. This is a relatively new technology for developers as well. Much effort is already being put into making this program easy to use for the average user. It just takes time!
  • Notice: any issue related to running the code must be opened in the faceswap Forum!

For haters

Sorry, no time for that.

About github.com/deepfakes

What is this repo?

It is a community repository for active users.

Why this repo?

The joshua-wu repo seems inactive. Simple bugs, like a missing http:// in front of URLs, have gone unfixed for days.

Why is it named 'deepfakes' if it is not /u/deepfakes?

  1. Because a typosquat would have happened sooner or later as the project grows
  2. Because we wanted to recognize the original author
  3. Because it will better federate contributors and users

What if /u/deepfakes feels bad about that?

This is a friendly typosquat, and it is fully dedicated to the project. If /u/deepfakes wants to take over this repo/user and drive the project, he is welcome to do so (raise an issue, and he will be contacted on Reddit). Please do not send /u/deepfakes messages for help with the code you find here.

About machine learning

How does a computer know how to recognize/shape faces? How does machine learning work? What is a neural network?

It's complicated. Here's a good video that makes the process understandable: How Machines Learn

Here's a slightly more in depth video that tries to explain the basic functioning of a neural network: How Machines Learn

tl;dr: training data + trial and error
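That tl;dr fits in a few lines of code. Here is a toy illustration (nothing to do with faceswap's real networks): a one-parameter model that learns y = 2x from training data purely by trial and error:

```python
# Toy "training data + trial and error": fit y = w * x by nudging w in
# whichever direction lowers the error on the training data.
data = [(1, 2), (2, 4), (3, 6)]  # training data sampled from y = 2x

def error(w):
    # Sum of squared prediction errors over the training data.
    return sum((w * x - y) ** 2 for x, y in data)

w, step = 0.0, 0.01
for _ in range(1000):
    if error(w + step) < error(w):    # trial: nudge up
        w += step
    elif error(w - step) < error(w):  # trial: nudge down
        w -= step

print(round(w, 2))  # settles near 2.0
```

Real neural network training replaces the blind nudging with gradients, which tell the optimizer which direction reduces the error without having to try both, but the loop is conceptually the same.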