
resemble-ai / Resemblyzer

A Python package to analyze and compare voices with deep learning


Top Related Projects

  • Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time
  • TTS: :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
  • tacotron2: Tacotron 2 - PyTorch implementation with faster-than-realtime inference
  • mimic3: A fast local neural text to speech engine for Mycroft

Quick Overview

Resemblyzer is a Python library for voice similarity analysis and speaker diarization. It provides tools to extract speaker embeddings from audio files and compare them to determine speaker similarity or identify unique speakers in multi-speaker audio.

Pros

  • Easy-to-use API for voice similarity analysis
  • Supports both pre-trained and custom voice models
  • Efficient implementation for real-time processing
  • Works well with various audio formats and qualities

Cons

  • Limited documentation and examples
  • Requires some understanding of voice processing concepts
  • May have accuracy issues with certain accents or languages
  • Dependency on specific versions of libraries can cause compatibility issues

Code Examples

  1. Loading an audio file and extracting speaker embeddings:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path

encoder = VoiceEncoder()
wav = preprocess_wav(Path("path/to/audio.wav"))
embedding = encoder.embed_utterance(wav)
```
  2. Comparing two speakers:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path
from scipy.spatial.distance import cosine

encoder = VoiceEncoder()
wav1 = preprocess_wav(Path("speaker1.wav"))
wav2 = preprocess_wav(Path("speaker2.wav"))

embed1 = encoder.embed_utterance(wav1)
embed2 = encoder.embed_utterance(wav2)

similarity = 1 - cosine(embed1, embed2)
print(f"Speaker similarity: {similarity:.2f}")
```
  3. Performing speaker diarization. Resemblyzer has no built-in `diarize` function; instead, as in its demo scripts, you compare continuous partial embeddings of the audio against reference embeddings for each known speaker (the speaker names and reference files below are placeholders):

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path
import numpy as np

encoder = VoiceEncoder()
wav = preprocess_wav(Path("multi_speaker_audio.wav"))

# Embed a few seconds of reference audio per known speaker
speaker_names = ["speaker1", "speaker2"]
speaker_embeds = [encoder.embed_utterance(preprocess_wav(Path(f"{name}.wav")))
                  for name in speaker_names]

# Continuous embedding of the multi-speaker audio: one partial
# embedding per step (rate=16 gives 16 partials per second)
_, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)

# Label each step with the most similar reference speaker
similarity = cont_embeds @ np.array(speaker_embeds).T
speaker_labels = [speaker_names[i] for i in similarity.argmax(axis=1)]
print("Speaker labels:", speaker_labels)
```
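Since the embeddings returned by `embed_utterance` are L2-normalized, the cosine-based similarity in example 2 reduces to a plain dot product. A minimal numpy sketch, using synthetic unit vectors as stand-ins for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    # Scale a vector to unit L2 norm, like Resemblyzer's embeddings
    return v / np.linalg.norm(v)

# Synthetic stand-ins for two 256-dimensional speaker embeddings
a = unit(rng.normal(size=256))
b = unit(rng.normal(size=256))

# For unit vectors, 1 - cosine_distance(a, b) == a @ b
cos_sim = a @ b
print(f"similarity: {cos_sim:.3f}")
```

This is why demo code often scores speakers with a matrix product instead of calling `scipy.spatial.distance.cosine` in a loop.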

Getting Started

To get started with Resemblyzer, follow these steps:

  1. Install the library:

```bash
pip install resemblyzer
```

  2. Import the necessary modules:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path
```

  3. Load an audio file and extract embeddings:

```python
encoder = VoiceEncoder()
wav = preprocess_wav(Path("path/to/audio.wav"))
embedding = encoder.embed_utterance(wav)
```

  4. Use the embeddings for voice similarity analysis or speaker diarization as shown in the code examples above.

Competitor Comparisons

Clone a voice in 5 seconds to generate arbitrary speech in real-time

Pros of Real-Time-Voice-Cloning

  • Offers end-to-end voice cloning capabilities, including synthesis
  • Provides a user-friendly interface for real-time voice cloning
  • Includes pre-trained models for immediate use

Cons of Real-Time-Voice-Cloning

  • More complex setup and dependencies
  • Requires more computational resources
  • Less focused on speaker embedding, which is Resemblyzer's specialty

Code Comparison

Resemblyzer

```python
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()
embed = encoder.embed_utterance(preprocess_wav("path/to/audio.wav"))
```

Real-Time-Voice-Cloning

```python
from pathlib import Path

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Each stage of the cloning pipeline loads its own pretrained model
encoder.load_model(Path("encoder.pt"))
synthesizer = Synthesizer(Path("synthesizer.pt"))
vocoder.load_model(Path("vocoder.pt"))
```

The code snippets highlight the difference in focus between the two projects. Resemblyzer is more streamlined for speaker embedding, while Real-Time-Voice-Cloning involves multiple components for full voice cloning functionality.


:robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

Pros of TTS

  • Comprehensive text-to-speech system with multiple models and voices
  • Active development and community support
  • Supports multiple languages and accents

Cons of TTS

  • More complex setup and usage compared to Resemblyzer
  • Requires more computational resources for training and inference

Code Comparison

TTS:

```python
from TTS.utils.synthesizer import Synthesizer

synthesizer = Synthesizer(
    tts_checkpoint="path/to/model.pth",
    tts_config_path="path/to/config.json",
    vocoder_checkpoint="path/to/vocoder.pth",
    vocoder_config="path/to/vocoder_config.json",
)

wav = synthesizer.tts("Hello, world!")
```

Resemblyzer:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path

encoder = VoiceEncoder()
wav = preprocess_wav(Path("path/to/audio.wav"))
embedding = encoder.embed_utterance(wav)
```

TTS offers a complete text-to-speech pipeline, while Resemblyzer focuses on voice embedding and speaker verification. TTS provides more flexibility in terms of voice generation and customization, but requires more setup and resources. Resemblyzer is simpler to use and more lightweight, making it ideal for voice analysis tasks, but lacks the full text-to-speech capabilities of TTS.

Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Pros of Tacotron2

  • Generates high-quality speech synthesis with natural-sounding prosody
  • Supports multi-speaker synthesis and voice cloning capabilities
  • Backed by NVIDIA, with extensive documentation and community support

Cons of Tacotron2

  • Requires significant computational resources and training time
  • More complex architecture, potentially harder to implement and fine-tune
  • Primarily focused on text-to-speech, less versatile for other audio tasks

Code Comparison

Resemblyzer (voice embedding extraction):

```python
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()
embed = encoder.embed_utterance(preprocess_wav("path/to/audio.wav"))
```

Tacotron2 (text-to-speech synthesis):

```python
from tacotron2.hparams import create_hparams
from tacotron2.model import Tacotron2

hparams = create_hparams()
model = Tacotron2(hparams)
# `text` must first be encoded as a tensor of character IDs
mel_outputs, mel_outputs_postnet, _, alignments = model.inference(text)
```

Key Differences

  • Resemblyzer focuses on voice embedding extraction for speaker verification and recognition tasks
  • Tacotron2 is primarily designed for end-to-end text-to-speech synthesis
  • Resemblyzer is more lightweight and easier to integrate into existing projects
  • Tacotron2 offers more advanced speech synthesis capabilities but requires more resources

A fast local neural text to speech engine for Mycroft

Pros of mimic3

  • Focuses on text-to-speech (TTS) synthesis, providing a complete TTS solution
  • Supports multiple languages and voices out of the box
  • Integrates well with the Mycroft AI ecosystem for voice assistants

Cons of mimic3

  • Larger project scope, potentially more complex to set up and use
  • May require more computational resources for TTS generation
  • Less focused on voice embedding and speaker verification tasks

Code Comparison

mimic3:

```python
# Illustrative usage; check the mimic3 documentation for the exact
# Python API, which may differ from this sketch
from mimic3_tts import Mimic3TTS

tts = Mimic3TTS(voice="en_US/vctk_low")
audio = tts.synthesize("Hello, world!")
```

Resemblyzer:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path

encoder = VoiceEncoder()
wav = preprocess_wav(Path("path/to/audio.wav"))
embedding = encoder.embed_utterance(wav)
```

Key Differences

Resemblyzer is primarily focused on voice embedding and speaker verification, while mimic3 is a full-fledged TTS system. Resemblyzer is more suitable for tasks involving voice analysis and comparison, whereas mimic3 is better suited for generating speech from text across multiple languages and voices.


README

Resemblyzer allows you to derive a high-level representation of a voice through a deep learning model (referred to as the voice encoder). Given an audio file of speech, it creates a summary vector of 256 values (an embedding, often shortened to "embed" in this repo) that summarizes the characteristics of the voice spoken.

N.B.: this repo holds 100 MB of audio data for demonstration purposes. To get the package alone, run `pip install resemblyzer` (Python 3.5+ is required).

Demos

Speaker diarization: [Demo 02] recognize who is talking when with only a few seconds of reference audio per speaker:
(click the image for a video)

demo_02

Fake speech detection: [Demo 05] modest detection of fake speech by comparing the similarity of 12 unknown utterances (6 real ones, 6 fakes) against ground truth reference audio. Scores above the dashed line are predicted as real, so the model makes one error here.

demo_05

For reference, this is the fake video that achieved a high score.

Visualizing the manifold:
[Demo 03 - left] projecting the embeddings of 100 utterances (10 each from 10 speakers) in 2D space. The utterances from the same speakers form a tight cluster. With a trivial clustering algorithm, the speaker verification error rate for this example (with data unseen in training) would be 0%.
[Demo 04 - right] same as demo 03 but with 251 embeddings all from distinct speakers, highlighting that the model has learned on its own to identify the sex of the speaker.

demo_03_04

Cross-similarity: [Demo 01] comparing 10 utterances from 10 speakers against 10 other utterances from the same speakers.

demo_01
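The structure of that cross-similarity matrix can be mimicked with synthetic unit embeddings (a toy sketch; real embeddings would come from `encoder.embed_utterance`):

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(v):
    # Normalize rows to unit L2 norm, like Resemblyzer's embeddings
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy setup: 3 speakers, 2 utterances each. A speaker's utterances
# share a base vector plus a small perturbation.
bases = unit(rng.normal(size=(3, 256)))
utt_a = unit(bases + 0.02 * rng.normal(size=(3, 256)))
utt_b = unit(bases + 0.02 * rng.normal(size=(3, 256)))

# Cross-similarity matrix: dot products between every pair
sim = utt_a @ utt_b.T
print(np.round(sim, 2))  # same-speaker pairs dominate the diagonal
```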

What can I do with this package?

Resemblyzer has many uses:

  • Voice similarity metric: compare different voices and get a value on how similar they sound. This leads to other applications:
    • Speaker verification: create a voice profile for a person from a few seconds of speech (5s - 30s) and compare it to that of new audio. Reject similarity scores below a threshold.
    • Speaker diarization: figure out who is talking when by comparing voice profiles with the continuous embedding of a multispeaker speech segment.
    • Fake speech detection: verify if some speech is legitimate or fake by comparing the similarity of possible fake speech to real speech.
  • High-level feature extraction: you can use the embeddings generated as feature vectors for machine learning or data analysis. This also leads to other applications:
    • Voice cloning: see this other project.
    • Component analysis: figure out accents, tones, prosody, gender, ... through a component analysis of the embeddings.
    • Virtual voices: create entirely new voice embeddings by sampling from a prior distribution.
  • Loss function: you can backpropagate through the voice encoder model and use it as a perceptual loss for your deep learning model! The voice encoder is written in PyTorch.
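The verification recipe above (enroll a profile, compare new audio, apply a threshold) can be sketched with synthetic embeddings; the threshold value here is purely illustrative and would be tuned on held-out data:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    # Unit L2 norm, like Resemblyzer's embeddings
    return v / np.linalg.norm(v)

# Hypothetical enrolled profile plus two test embeddings: one a slight
# perturbation of the profile (same speaker), one unrelated
profile = unit(rng.normal(size=256))
same = unit(profile + 0.02 * rng.normal(size=256))
other = unit(rng.normal(size=256))

THRESHOLD = 0.75  # illustrative; tune on held-out data

for name, embed in [("same speaker", same), ("other speaker", other)]:
    score = profile @ embed
    verdict = "accept" if score >= THRESHOLD else "reject"
    print(f"{name}: score={score:.3f} -> {verdict}")
```

With real audio, `profile` would be `encoder.embed_utterance` over a few seconds of enrollment speech, and each test embedding would come from the audio being verified.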

Resemblyzer is fast to execute (around 1000x real-time on a GTX 1080, with a minimum of 10ms for I/O operations) and can run on either CPU or GPU. It is robust to noise. It currently works best on English, but should still perform somewhat decently on other languages.

Code example

This is a short example showing how to use Resemblyzer:

```python
from resemblyzer import VoiceEncoder, preprocess_wav
from pathlib import Path
import numpy as np

fpath = Path("path_to_an_audio_file")
wav = preprocess_wav(fpath)

encoder = VoiceEncoder()
embed = encoder.embed_utterance(wav)
np.set_printoptions(precision=3, suppress=True)
print(embed)
```

I highly suggest taking a peek at the demos to understand how similarity is computed and to see practical uses of the voice encoder.

Additional info

Resemblyzer emerged as a side project of the Real-Time Voice Cloning repository. The pretrained model that comes with Resemblyzer is interchangeable with models trained in that repository, so feel free to finetune a model on new data and possibly new languages! The paper from which the voice encoder was implemented is Generalized End-To-End Loss for Speaker Verification (in which it is called the speaker encoder).