alievk/avatarify-python

Avatars for Zoom, Skype and other video-conferencing apps.

Top Related Projects

  • DeepFaceLab: the leading software for creating deepfakes.
  • first-order-model: source code for the paper First Order Motion Model for Image Animation.
  • faceswap: Deepfakes Software For All.
  • Wav2Lip: code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020.
  • DensePose: a real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body.

Quick Overview

Avatarify-python is an open-source project that allows users to create AI-powered avatars for video conferencing and streaming. It uses deep learning techniques to animate a still image of a person, making it appear as if they are speaking and moving in real time during video calls.

Pros

  • Enables creation of realistic, animated avatars from a single image
  • Supports various video conferencing platforms (Zoom, Skype, etc.)
  • Offers both CPU and GPU acceleration for improved performance
  • Provides a user-friendly interface for non-technical users

Cons

  • Requires significant computational resources for optimal performance
  • May have occasional glitches or unnatural movements in the generated avatars
  • Setup process can be complex for some users, especially on certain operating systems
  • Limited customization options for advanced users

Code Examples

Avatarify-python is primarily an application rather than a code library, so it does not ship a documented public API; it is designed to be used as a standalone tool with a graphical user interface and command-line entry points.
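
That said, the predictor class used in the comparison snippets later on this page can be driven directly from Python. The following is a minimal sketch only: the constructor arguments mirror those shown in the first-order-model comparison below, while the image path and the set_source_image/predict call sequence are assumptions that may differ between versions.

import cv2
from afy.predictor_local import PredictorLocal

# Load the First Order Motion Model checkpoint (paths are illustrative)
predictor = PredictorLocal(
    config_path='fomm/config/vox-adv-256.yaml',
    checkpoint_path='vox-adv-cpk.pth.tar',
    relative=True,
    adapt_movement_scale=True,
)

# Choose the avatar image to animate
avatar = cv2.imread('avatars/einstein.jpg')
predictor.set_source_image(avatar)

# Drive the avatar with a frame from the default webcam
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    animated = predictor.predict(frame)  # animated avatar frame
cap.release()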

Getting Started

To get started with Avatarify-python:

  1. Clone the repository:

    git clone https://github.com/alievk/avatarify-python.git
    
  2. Install dependencies:

    cd avatarify-python
    pip install -r requirements.txt
    
  3. Download the pre-trained model:

    sh download_data.sh
    
  4. Run the application:

    python cam_fomm.py --config fomm/config/vox-adv-256.yaml --checkpoint vox-adv-cpk.pth.tar --cam 0 --relative --adapt_scale
    

For detailed instructions and troubleshooting, refer to the project's README file on GitHub.
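
The --cam flag in step 4 selects which webcam to capture from. If index 0 does not pick up the right device, a small OpenCV probe (a hypothetical helper, not part of the repository) can list the indices that respond:

import cv2

# Probe the first few video device indices and report which ones open
for index in range(5):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        print(f"camera index {index} is available")
    cap.release()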

Competitor Comparisons

DeepFaceLab: the leading software for creating deepfakes.

Pros of DeepFaceLab

  • More comprehensive and versatile for deepfake creation
  • Supports a wider range of face manipulation techniques
  • Offers more advanced training options and fine-tuning capabilities

Cons of DeepFaceLab

  • Steeper learning curve and more complex setup process
  • Requires more computational resources and time for training
  • Less suitable for real-time applications

Code Comparison

DeepFaceLab example (model initialization):

from core.leras import nn
model = nn.ModelBase(model_path, training=False)
model.build_for_run(["in_face:0", "in_pitch:0", "in_yaw:0"])

Avatarify example (face alignment):

from face_alignment import FaceAlignment, LandmarksType
fa = FaceAlignment(LandmarksType._2D, flip_input=True)
landmarks = fa.get_landmarks(frame)[0]

Both projects focus on face manipulation, but DeepFaceLab is oriented towards creating high-quality deepfakes through extensive offline training, while Avatarify is designed for real-time avatar animation in video calls. DeepFaceLab offers more control and customization options, making it suitable for professional-grade deepfake production. Avatarify, on the other hand, prioritizes ease of use and real-time performance, making it more accessible for casual users or anyone who wants to experiment with animated avatars quickly.

first-order-model: source code for the paper First Order Motion Model for Image Animation.

Pros of first-order-model

  • More versatile, capable of animating various objects beyond just faces
  • Provides a more comprehensive framework for motion transfer
  • Offers more detailed documentation and explanations of the underlying technology

Cons of first-order-model

  • Less user-friendly for non-technical users
  • Requires more setup and configuration
  • Not optimized specifically for real-time video conferencing applications

Code Comparison

first-order-model:

from modules.generator import OcclusionAwareGenerator
from modules.keypoint_detector import KPDetector
from animate import normalize_kp

generator = OcclusionAwareGenerator(**config['model_params']['generator_params'],
                                    **config['model_params']['common_params'])
kp_detector = KPDetector(**config['model_params']['kp_detector_params'],
                         **config['model_params']['common_params'])

avatarify-python:

from afy.predictor_local import PredictorLocal
from afy.arguments import opt

predictor = PredictorLocal(
    config_path=opt.config,
    checkpoint_path=opt.checkpoint,
    relative=opt.relative,
    adapt_movement_scale=opt.adapt_scale,
    device=opt.device
)

The code snippets show that first-order-model uses separate modules for generation and keypoint detection, while avatarify-python employs a single predictor class for its functionality. This reflects the more general-purpose nature of first-order-model compared to the specialized focus of avatarify-python on facial animation for video conferencing.
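
For context, here is a rough sketch of how those first-order-model components could be wired together to animate a single frame, loosely following the repository's demo script and continuing from the imports above; the exact tensor preparation and keyword names are assumptions and may differ between versions.

# source: avatar image tensor (1, 3, H, W); driving_frame: current webcam frame tensor
kp_source = kp_detector(source)
kp_driving_initial = kp_detector(first_driving_frame)
kp_driving = kp_detector(driving_frame)

# Normalize the driving keypoints relative to the first frame so the avatar
# follows the driver's motion rather than copying its absolute pose
kp_norm = normalize_kp(kp_source=kp_source, kp_driving=kp_driving,
                       kp_driving_initial=kp_driving_initial,
                       use_relative_movement=True, adapt_movement_scale=True)

out = generator(source, kp_source=kp_source, kp_driving=kp_norm)
prediction = out['prediction']  # the animated avatar frame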

faceswap: Deepfakes Software For All.

Pros of Faceswap

  • More comprehensive and feature-rich, offering a wider range of face swapping capabilities
  • Larger community and more extensive documentation, making it easier for users to get started and troubleshoot issues
  • Supports both photo and video face swapping, providing more versatility

Cons of Faceswap

  • Steeper learning curve due to its complexity and extensive features
  • Requires more computational resources, which may be challenging for users with less powerful hardware
  • Longer processing times for face swapping, especially with high-quality outputs

Code Comparison

Avatarify-python (simplified usage):

from afy.predictor_local import PredictorLocal
predictor = PredictorLocal(config_path='config.yaml')
result = predictor.predict(frame)

Faceswap (simplified usage):

from lib.cli import FullHelpArgumentParser
from plugins.train.model import Model
from plugins.extract.pipeline import Extractor
args = FullHelpArgumentParser().parse_args()
model = Model(args)
extractor = Extractor(args)

Both projects aim to achieve face manipulation, but Faceswap offers a more comprehensive toolkit with additional features and flexibility. Avatarify-python focuses on real-time avatar creation, while Faceswap provides a broader range of face swapping capabilities. The code snippets demonstrate that Faceswap has a more complex setup process, reflecting its wider feature set.

Wav2Lip: code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For an HD commercial model, try out Sync Labs.

Pros of Wav2Lip

  • Focuses specifically on lip-syncing, potentially offering better results for mouth movements
  • Supports both image and video inputs for the target face
  • Can generate realistic lip movements for a wider range of languages and accents

Cons of Wav2Lip

  • Limited to lip-syncing only, unlike Avatarify which offers full face animation
  • May require more computational resources for processing video inputs
  • Less suitable for real-time applications compared to Avatarify

Code Comparison

Wav2Lip example:

from inference import Wav2Lip
model = Wav2Lip()
result = model(face, audio)

Avatarify example:

from afy.predictor_local import PredictorLocal
predictor = PredictorLocal(config_path='config.yaml')
result = predictor.predict(frame)

Both projects use deep learning models for facial manipulation, but Wav2Lip focuses on lip-syncing while Avatarify offers broader facial animation capabilities. Wav2Lip may be more suitable for projects requiring precise lip movements, especially in multi-lingual contexts. Avatarify, on the other hand, provides a more comprehensive solution for full-face animation and is better suited for real-time applications like video conferencing.

DensePose: a real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body.

Pros of DensePose

  • More comprehensive human body pose estimation, including dense surface correspondence
  • Backed by Facebook's research team, potentially more robust and well-maintained
  • Offers a wider range of applications beyond face animation

Cons of DensePose

  • More complex to set up and use, requiring deeper understanding of computer vision concepts
  • Heavier computational requirements due to its comprehensive nature
  • Less focused on real-time applications compared to Avatarify

Code Comparison

Avatarify-python (simplified usage):

from afy.predictor_local import PredictorLocal
predictor = PredictorLocal(config_path='config.yaml')
avatar = predictor.get_frame_avatar()

DensePose (simplified usage):

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from densepose import add_densepose_config

cfg = get_cfg()
add_densepose_config(cfg)
predictor = DefaultPredictor(cfg)
outputs = predictor(image)

Both projects focus on different aspects of computer vision. Avatarify-python is tailored for face animation and swapping, while DensePose provides a more comprehensive human body pose estimation. Avatarify is more user-friendly for specific face-related tasks, while DensePose offers broader applications but requires more expertise to implement effectively.

README

Avatarify Python

Photorealistic avatars for video-conferencing.

Avatarify Python requires manually downloading and installing some dependencies, and is therefore best suited for users who have some experience with command-line applications. Avatarify Desktop, which aims to be easier to install and use, is recommended for most users. If you still want to use Avatarify Python, proceed to the install instructions.

Based on First Order Motion Model.

News

  • 7 March 2021. Renamed the project to Avatarify Python to distinguish it from other versions of Avatarify.
  • 14 December 2020. Released Avatarify Desktop. Check it out here.
  • 11 July 2020. Added Docker support. Now you can run Avatarify from Docker on Linux. Thanks to mikaelhg and mintmaker for their contributions!
  • 22 May 2020. Added Google Colab mode. Now you can run Avatarify on any computer without GPU!
  • 7 May 2020. Added remote GPU support for all platforms (based on mynameisfiber's solution). Demo. Deployment instructions.
  • 24 April 2020. Added Windows installation tutorial.
  • 17 April 2020. Created Slack community. Please join via invitation link.
  • 15 April 2020. Added StyleGAN-generated avatars. Just press Q and you will drive a person who never existed. Every time you press the button, a new avatar is sampled.
  • 13 April 2020. Added Windows support (kudos to 9of9).

Avatarify apps

We have deployed Avatarify on iOS and Android devices using our proprietary inference engine. The iOS version features the Life mode for recording animations in real time. However, the Life mode is not available on Android devices due to the diversity of the devices we have to support.

<img src="docs/appstore-badge.png" alt="drawing" height="40"/> <img src="docs/google-play-badge.png" alt="drawing" height="40"/>