
coqui-ai/TTS

πŸΈπŸ’¬ - a deep learning toolkit for Text-to-Speech, battle-tested in research and production


Top Related Projects

  • TTS (Mozilla): :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
  • Tacotron2 (NVIDIA): Tacotron 2 - PyTorch implementation with faster-than-realtime inference
  • WaveRNN: WaveRNN Vocoder + TTS
  • wavenet_vocoder: WaveNet vocoder
  • Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time
  • Tacotron-2: DeepMind's Tacotron-2 Tensorflow implementation

Quick Overview

Coqui-ai/TTS is an open-source text-to-speech library that provides deep learning models for generating human-like speech from text input. It offers a variety of pre-trained models, supports multiple languages, and allows for fine-tuning and custom model creation.

Pros

  • Wide range of pre-trained models for different languages and accents
  • Supports both GPU and CPU inference
  • Extensible architecture for adding new models and voices
  • Active community and regular updates

Cons

  • Requires significant computational resources for training custom models
  • Some advanced features may have a steep learning curve
  • Documentation can be inconsistent or outdated in some areas
  • Limited support for low-resource languages

Code Examples

  1. Basic text-to-speech synthesis:
from TTS.api import TTS

# Initialize TTS with a pre-trained model
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Generate speech from text
tts.tts_to_file(text="Hello, world!", file_path="output.wav")
  2. Using a different language model:
from TTS.api import TTS

# Initialize TTS with a Spanish model
tts = TTS(model_name="tts_models/es/mai/tacotron2-DDC")

# Generate Spanish speech
tts.tts_to_file(text="Hola, mundo!", file_path="output_es.wav")
  3. Customizing speaker and style:
from TTS.api import TTS

# Initialize TTS with a multi-speaker model
tts = TTS(model_name="tts_models/en/vctk/vits")

# Generate speech with a specific speaker and style
tts.tts_to_file(
    text="This is a custom voice sample.",
    file_path="output_custom.wav",
    speaker="p225",  # VCTK speaker ID
    style_wav="path/to/reference_audio.wav"
)

Getting Started

To get started with Coqui-ai/TTS, follow these steps:

  1. Install the library:
pip install TTS
  2. Use a pre-trained model for inference:
from TTS.api import TTS

# List available models
print(TTS().list_models())

# Initialize TTS with a chosen model
tts = TTS(model_name="tts_models/en/ljspeech/fast_pitch")

# Generate speech
tts.tts_to_file(text="Welcome to Coqui TTS!", file_path="welcome.wav")

For more advanced usage, including custom model training and fine-tuning, refer to the official documentation and examples in the GitHub repository.

Competitor Comparisons


:robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

Pros of TTS (Mozilla)

  • Longer development history and potentially more stable codebase
  • Larger community and more extensive documentation
  • Supports a wider range of legacy models and techniques

Cons of TTS (Mozilla)

  • Less frequent updates and potentially outdated features
  • May lack some cutting-edge models and techniques
  • Could have performance limitations compared to newer implementations

Code Comparison

TTS (Mozilla):

from TTS.utils.synthesizer import Synthesizer

synthesizer = Synthesizer(
    tts_checkpoint="path/to/model.pth",
    tts_config_path="path/to/config.json",
    vocoder_checkpoint="path/to/vocoder.pth",
    vocoder_config="path/to/vocoder_config.json"
)

TTS (Coqui):

from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

The Coqui TTS implementation offers a more streamlined API, making it easier to use out-of-the-box. Mozilla's TTS requires more manual configuration but provides greater flexibility for advanced users.

Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Pros of Tacotron2

  • Developed by NVIDIA, known for high-performance AI solutions
  • Focuses specifically on Tacotron 2 architecture, potentially offering deeper optimization
  • Includes pre-trained models for quick deployment

Cons of Tacotron2

  • Limited to Tacotron 2 architecture, less flexible than TTS
  • Less active development and community support compared to TTS
  • Fewer features and voice options compared to the more comprehensive TTS project

Code Comparison

TTS:

from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

Tacotron2:

import torch
from hparams import create_hparams     # hyperparameter helper from the NVIDIA tacotron2 repo
from model import Tacotron2            # model definition from the NVIDIA tacotron2 repo
from text import text_to_sequence

model = Tacotron2(create_hparams())
text = "Hello world!"
sequence = torch.LongTensor(text_to_sequence(text, ['english_cleaners']))[None, :]
mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)

Summary

TTS offers a more comprehensive and flexible text-to-speech solution with broader language and model support, while Tacotron2 provides a focused implementation of the Tacotron 2 architecture. TTS has a more user-friendly API and active community, whereas Tacotron2 might appeal to those specifically interested in the Tacotron 2 model or NVIDIA's implementation.


WaveRNN Vocoder + TTS

Pros of WaveRNN

  • Focused specifically on waveform generation, potentially offering more specialized and optimized performance in this area
  • Lighter weight and potentially faster training times for certain use cases
  • May be easier to integrate into existing projects due to its more focused scope

Cons of WaveRNN

  • Less comprehensive feature set compared to TTS, which offers a full end-to-end TTS solution
  • May require additional components or preprocessing steps to achieve full TTS functionality
  • Less active development and community support compared to the more recently updated TTS

Code Comparison

WaveRNN:

model = Model(rnn_dims=512, fc_dims=512, bits=9, pad=2,
              upsample_factors=(5,5,8), feat_dims=80,
              compute_dims=128, res_out_dims=128, res_blocks=10)

TTS:

from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

The code snippets highlight the difference in approach: WaveRNN focuses on low-level model configuration, while TTS provides a high-level API for easy text-to-speech generation.

WaveNet vocoder

Pros of wavenet_vocoder

  • Focused specifically on WaveNet vocoder implementation
  • Lightweight and easier to understand for those interested in WaveNet architecture
  • Provides a clear example of neural vocoder implementation

Cons of wavenet_vocoder

  • Less actively maintained compared to TTS
  • Limited to WaveNet vocoder, while TTS offers multiple vocoder options
  • Fewer features and less comprehensive than TTS for end-to-end text-to-speech tasks

Code Comparison

wavenet_vocoder:

# Simplified sketch of the training loop from wavenet_vocoder's train script
from torch import optim

model = build_model()                       # repo helper that constructs the WaveNet model
optimizer = optim.Adam(model.parameters())
for epoch in range(num_epochs):
    for batch in data_loader:
        loss = model(batch)                 # compute the loss for this batch (simplified)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

TTS:

# Simplified illustration of 🐸TTS's high-level training and synthesis workflow
# (not the literal API; see the Python API examples below)
model = TTSModel()
trainer = Trainer(model, config)
trainer.fit()
trainer.test()
synthesizer = Synthesizer(model)
wav = synthesizer.tts("Hello, world!")

The wavenet_vocoder code shows a more basic training loop, while TTS abstracts much of the complexity into higher-level classes and methods, making it easier to use for end-to-end text-to-speech tasks.

Clone a voice in 5 seconds to generate arbitrary speech in real-time

Pros of Real-Time-Voice-Cloning

  • Focuses specifically on real-time voice cloning, offering a more specialized solution
  • Provides a user-friendly interface for voice cloning demonstrations
  • Implements a pre-trained model for quick voice cloning without extensive training

Cons of Real-Time-Voice-Cloning

  • Less actively maintained compared to TTS
  • Limited to voice cloning functionality, while TTS offers a broader range of text-to-speech capabilities
  • May require more setup and dependencies for real-time processing

Code Comparison

Real-Time-Voice-Cloning:

from encoder.params_model import model_embedding_size as speaker_embedding_size
from utils.argutils import print_args
from synthesizer.inference import Synthesizer
from encoder import inference as encoder
from vocoder import inference as vocoder

TTS:

from TTS.api import TTS
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

The Real-Time-Voice-Cloning code snippet shows the import of various components for voice cloning, while the TTS code demonstrates a simpler API for text-to-speech conversion. TTS offers a more straightforward approach for general text-to-speech tasks, while Real-Time-Voice-Cloning provides more granular control over the voice cloning process.

DeepMind's Tacotron-2 Tensorflow implementation

Pros of Tacotron-2

  • Focused implementation of the Tacotron 2 architecture
  • Includes detailed documentation on model training and inference
  • Supports both Tacotron 1 and 2 architectures

Cons of Tacotron-2

  • Less actively maintained compared to TTS
  • Limited pre-trained models and language support
  • Fewer features and customization options

Code Comparison

Tacotron-2:

def create_hparams(hparams_string=None, verbose=False):
    hparams = tf.contrib.training.HParams(
        # Comma-separated list of cleaners to run on text prior to training and eval.
        cleaners='english_cleaners',
        # Audio
        num_mels=80,
        num_freq=1025,
        sample_rate=20000,
        frame_length_ms=50,
        frame_shift_ms=12.5,
        # ...
    )

TTS:

class BaseDatasetConfig(Coqpit):
    """Base dataset configuration class."""

    formatter: str = "ljspeech"
    dataset_name: str = ""
    path: str = ""
    meta_file_train: str = ""
    meta_file_val: str = ""
    # ...

The code snippets show different approaches to configuration. Tacotron-2 uses TensorFlow's HParams, while TTS employs a custom configuration class based on Coqpit. TTS offers a more modular and extensible approach to dataset and model configuration.
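
For a concrete sense of the Coqpit-based configuration style, here is a minimal sketch of describing a dataset with BaseDatasetConfig. The import path and file locations below are assumptions and may differ between TTS versions:

# Minimal sketch of pointing TTS at an LJSpeech-style dataset.
# The import path is an assumption and may differ between TTS versions.
from TTS.tts.configs.shared_configs import BaseDatasetConfig

dataset_config = BaseDatasetConfig(
    formatter="ljspeech",             # formatter matching the dataset layout
    meta_file_train="metadata.csv",   # transcript file inside the dataset folder
    path="/path/to/LJSpeech-1.1/",    # hypothetical dataset location
)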


README

🐸Coqui.ai News

  • 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
  • 📣 ⓍTTS fine-tuning code is out. Check the example recipes.
  • 📣 ⓍTTS can now stream with <200ms latency.
  • 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released: Blog Post, Demo, Docs
  • 📣 🐶Bark is now available for inference with unconstrained voice cloning. Docs
  • 📣 You can use ~1100 Fairseq models with 🐸TTS.
  • 📣 🐸TTS now supports 🐢Tortoise with faster inference. Docs

🐸TTS is a library for advanced Text-to-Speech generation.

🚀 Pretrained models in +1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.


💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

  • 🚨 Bug Reports: GitHub Issue Tracker
  • 🎁 Feature Requests & Ideas: GitHub Issue Tracker
  • 👩‍💻 Usage Questions: GitHub Discussions
  • 🗯 General Discussion: GitHub Discussions or Discord

🔗 Links and Resources

  • 💼 Documentation: ReadTheDocs
  • 💾 Installation: TTS/README.md
  • 👩‍💻 Contributing: CONTRIBUTING.md
  • 📌 Road Map: Main Development Plans
  • 🚀 Released Models: TTS Releases and Experimental Models
  • 📰 Papers: TTS Papers

🥇 TTS Performance

Underlined "TTS*" and "Judy*" are internal 🐸TTS models that are not released open-source. They are here to show the potential. Models prefixed with a dot (.Jofish .Abe and .Janice) are real human voices.

Features

  • High-performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on the terminal and Tensorboard.
  • Support for Multi-speaker TTS.
  • Efficient, flexible, lightweight but feature-complete Trainer API (a minimal training sketch follows this list).
  • Released and ready-to-use models.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Utilities to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.
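
The Trainer API mentioned above can be sketched roughly as follows, loosely modeled on the public training recipes. The imports, class names, and arguments are assumptions taken from those recipes and may differ between TTS versions:

# Minimal training sketch based on the public recipes; exact imports and
# arguments are assumptions and may differ between TTS versions.
from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = "runs/"  # hypothetical output directory
dataset_config = BaseDatasetConfig(formatter="ljspeech", meta_file_train="metadata.csv", path="/path/to/LJSpeech-1.1/")
config = GlowTTSConfig(batch_size=32, run_eval=True, datasets=[dataset_config], output_path=output_path)

ap = AudioProcessor.init_from_config(config)           # audio processing settings from the config
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(TrainerArgs(), config, output_path, model=model,
                  train_samples=train_samples, eval_samples=eval_samples)
trainer.fit()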

Model Implementations

Spectrogram models

End-to-End Models

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog
  • Dynamic Convolutional Attention: paper
  • Alignment Network: paper

Speaker Encoder

Vocoders

Voice Conversion

You can also help us implement more models.

Installation

🐸TTS is tested on Ubuntu 18.04 with python >= 3.9, < 3.12.

If you are only interested in synthesizing speech with the released 🐸TTS models, installing from PyPI is the easiest option.

pip install TTS

If you plan to code or train models, clone 🐸TTS and install it locally.

git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras

If you are on Ubuntu (Debian), you can also run the following commands for installation.

$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install

If you are on Windows, 👑@GuyPaddock wrote installation instructions here.

Docker Image

You can also try TTS without installing it by using the Docker image. Simply run the following commands:

docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
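
Once the server is running, you can request synthesized audio over HTTP. A minimal sketch, assuming the demo server exposes an /api/tts endpoint that accepts a text query parameter on port 5002 (the endpoint and parameter names may differ between versions):

# Request a WAV file from the demo server; the /api/tts endpoint is an assumption.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"text": "Hello from the TTS server!"})
with urllib.request.urlopen(f"http://localhost:5002/api/tts?{params}") as response:
    with open("server_output.wav", "wb") as f:
        f.write(response.read())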

Synthesizing speech by 🐸TTS

🐍 Python API

Running a multi-speaker and multi-lingual model

import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Run TTS
# ❗ Since this is a multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech with a list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")

Running a single speaker model

# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")

Example voice conversion

Converting the voice in source_wav to the voice of target_wav

tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")

Example voice cloning together with the voice conversion model.

This way, you can clone voices by using any model in 🐸TTS.


tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)

Example text to speech using Fairseq models in ~1100 languages 🤯.

For Fairseq models, use the following name format: tts_models/<lang-iso_code>/fairseq/vits. You can find the language ISO codes here and learn about the Fairseq models here.

# TTS with on-the-fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)

Command-line tts

Synthesize speech on command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any models, it uses the LJSpeech-based English model.

Single Speaker Models

  • List provided models:

    $ tts --list_models
    
  • Get model info (for both tts_models and vocoder_models):

    • Query by type/name: model_info_by_name uses the model name as it appears in the --list_models output.

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      

      For example:

      $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
      $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
      
    • Query by type/idx: The model_query_idx uses the corresponding idx from --list_models.

      $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
      

      For example:

      $ tts --model_info_by_idx tts_models/3
      
    • Query model info by full name:

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      
  • Run TTS with default models:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    
  • Run TTS and pipe out the generated TTS wav file data:

    $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
    
  • Run a TTS model with its default vocoder model:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    
  • Run with specific TTS and vocoder models from the list:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    
  • Run your own TTS model (Using Griffin-Lim Vocoder):

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    
  • Run your own TTS and Vocoder models:

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    

Multi-speaker Models

  • List the available speakers and choose a <speaker_id> among them:

    $ tts --model_name "<language>/<dataset>/<model_name>"  --list_speaker_idxs
    
  • Run the multi-speaker TTS model with the target speaker ID:

    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>"  --speaker_idx <speaker_id>
    
  • Run your own multi-speaker TTS model:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    

Voice Conversion Models

$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)