
r9y9/deepvoice3_pytorch

PyTorch implementation of convolutional neural networks-based text-to-speech synthesis models

Top Related Projects

:robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Clone a voice in 5 seconds to generate arbitrary speech in real-time

WaveRNN Vocoder + TTS

A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)

DeepMind's Tacotron-2 Tensorflow implementation

Quick Overview

The r9y9/deepvoice3_pytorch repository is a PyTorch implementation of the DeepVoice3 text-to-speech (TTS) model, a convolutional neural-network-based system that generates natural-sounding speech from text. The project aims to provide a flexible and easy-to-use TTS solution for researchers and developers.

Pros

  • Flexible and Customizable: The project provides a modular design that allows users to easily customize and extend the model to fit their specific needs.
  • High-Quality Output: The DeepVoice3 model is capable of generating high-quality, natural-sounding speech.
  • Actively Maintained: The project is actively maintained, with regular updates and bug fixes.
  • Extensive Documentation: The project has detailed documentation, including installation instructions, usage examples, and technical details.

Cons

  • Computational Complexity: Training the DeepVoice3 model can be computationally intensive, requiring significant hardware resources.
  • Limited Pre-Trained Models: The project only provides pre-trained models for a few languages, limiting its out-of-the-box usability for other languages.
  • Steep Learning Curve: Customizing and extending the model may require a good understanding of deep learning and TTS systems, which can be a barrier for some users.
  • Dependency on PyTorch: The project is built on PyTorch, which may be a limitation for users who prefer other deep learning frameworks.

Code Examples

Here are a few illustrative code examples for the r9y9/deepvoice3_pytorch repository. Treat them as sketches of the workflow: the exact module paths and function signatures may differ from the current code base.

  1. Loading a Pre-Trained Model:
from deepvoice3_pytorch import build_model, load_checkpoint

model = build_model()
load_checkpoint("path/to/checkpoint.pth.tar", model)

This code demonstrates how to load a pre-trained DeepVoice3 model from a checkpoint file.

  2. Generating Speech from Text:
import torch
from deepvoice3_pytorch.synthesis import wavegen

text = "Hello, this is a sample text-to-speech output."
mel, audio = wavegen(model, text, p=0.33, sigma=0.4, length=None)

This code shows how to use the wavegen function to generate speech audio from a given text input.

  3. Training the Model:
from deepvoice3_pytorch.train import train
from deepvoice3_pytorch.data_loader import get_data_loaders

train_loader, val_loader = get_data_loaders(...)
train(model, train_loader, val_loader, ...)

This code snippet demonstrates how to train the DeepVoice3 model using the provided train function and data loaders.

  4. Evaluating the Model:
from deepvoice3_pytorch.evaluate import evaluate
from deepvoice3_pytorch.data_loader import get_data_loaders

test_loader = get_data_loaders(...)["test"]
metrics = evaluate(model, test_loader)

This code shows how to evaluate the trained DeepVoice3 model using the evaluate function and the provided test data loader.

Getting Started

To get started with the r9y9/deepvoice3_pytorch project, follow these steps:

  1. Clone the repository:
git clone https://github.com/r9y9/deepvoice3_pytorch.git
  2. Install the required dependencies:
cd deepvoice3_pytorch
pip install -r requirements.txt
  3. Download the pre-trained model checkpoint:
wget https://github.com/r9y9/deepvoice3_pytorch/releases/download/v1.0.0/ljspeech.v1.pth.tar
  4. Load the pre-trained model and generate speech:
from deepvoice3_pytorch import build_model, load_checkpoint
from deepvoice3_pytorch.synthesis import wavegen

model = build_model()
load_checkpoint("ljspeech.v1.pth.tar", model)

text = "Hello, this is a sample text-to-speech output."
mel, audio = wavegen(model, text, p=0.33, sigma=0.4, length=None)
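
To write the generated waveform to disk, something like the following should work. This is a sketch rather than code from the repository: it assumes audio from the snippet above is a 1-D NumPy float array in [-1, 1] and that the model was trained on 22050 Hz LJSpeech audio (adjust the sample rate to your hparams).

import numpy as np
from scipy.io import wavfile

# Assumed: `audio` is the float waveform produced above; scale to 16-bit PCM and save.
sample_rate = 22050  # assumption: LJSpeech default sample rate; check your hparams
pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
wavfile.write("output.wav", sample_rate, pcm)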

Competitor Comparisons

:robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)

Pros of TTS

  • More comprehensive and actively maintained project with regular updates
  • Supports a wider range of TTS models and voice conversion techniques
  • Offers pre-trained models and easy-to-use inference APIs

Cons of TTS

  • Steeper learning curve due to its more complex architecture
  • Requires more computational resources for training and inference
  • Less focused on a single architecture, which may be overwhelming for beginners

Code Comparison

TTS:

from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

deepvoice3_pytorch:

from synthesis import tts as _tts

waveform, alignment, spectrogram, mel = _tts(model, text, p=0)
librosa.output.write_wav(dst, waveform, sr=fs)

Summary

TTS offers a more feature-rich and actively maintained solution, while deepvoice3_pytorch provides a simpler, more focused implementation of the DeepVoice3 architecture. TTS is better suited for production environments and research, while deepvoice3_pytorch may be preferable for those specifically interested in the DeepVoice3 model or looking for a simpler starting point.

Tacotron 2 - PyTorch implementation with faster-than-realtime inference

Pros of Tacotron2

  • Officially supported by NVIDIA, ensuring high-quality implementation and optimization
  • Includes pre-trained models and extensive documentation for easier use
  • Leverages NVIDIA's GPU acceleration for faster training and inference

Cons of Tacotron2

  • More complex architecture, potentially requiring more computational resources
  • Less flexible for customization compared to DeepVoice3
  • Primarily focused on English language synthesis

Code Comparison

DeepVoice3:

# Simple model initialization
model = deepvoice3_pytorch.DeepVoice3(n_vocab=len(symbols),
                                     embed_dim=256,
                                     mel_dim=80,
                                     linear_dim=1025,
                                     r=4)

Tacotron2:

# More detailed model initialization
model = tacotron2.Tacotron2(n_mel_channels=80,
                            n_symbols=len(symbols),
                            symbols_embedding_dim=512,
                            encoder_kernel_size=5,
                            decoder_rnn_dim=1024,
                            prenet_dim=256,
                            max_decoder_steps=1000,
                            gate_threshold=0.5,
                            p_attention_dropout=0.1,
                            p_decoder_dropout=0.1)

The code comparison shows that Tacotron2 offers more fine-grained control over model parameters, while DeepVoice3 provides a simpler interface for quick setup.

Clone a voice in 5 seconds to generate arbitrary speech in real-time

Pros of Real-Time-Voice-Cloning

  • Offers real-time voice cloning capabilities
  • Includes a user-friendly toolbox for voice manipulation
  • Supports multi-speaker voice cloning

Cons of Real-Time-Voice-Cloning

  • More complex setup and dependencies
  • Potentially higher computational requirements
  • Less focus on low-resource languages

Code Comparison

Real-Time-Voice-Cloning:

def load_model(weights_fpath, verbose=True):
    model = SV2TTS(speakers_per_batch=1, verbose=verbose)
    model.load_state_dict(torch.load(weights_fpath))
    return model

deepvoice3_pytorch:

def tts(model, text, p=0, speaker_id=None, fast=False):
    if speaker_id is not None:
        speaker_id = torch.LongTensor([speaker_id])
    return model.forward(text, p=p, speaker_id=speaker_id, fast=fast)

The Real-Time-Voice-Cloning code focuses on loading a pre-trained model, while deepvoice3_pytorch's code snippet demonstrates the text-to-speech function. Real-Time-Voice-Cloning appears to use a more specialized model structure, while deepvoice3_pytorch offers more flexibility in terms of speaker selection and synthesis speed.

WaveRNN Vocoder + TTS

Pros of WaveRNN

  • Faster inference time due to its efficient architecture
  • Produces high-quality audio with fewer artifacts
  • More recent and actively maintained repository

Cons of WaveRNN

  • Requires more computational resources for training
  • Less flexibility in terms of voice customization
  • Steeper learning curve for beginners

Code Comparison

WaveRNN:

def forward(self, x, mels):
    bsize = x.size(0)
    h1, h2 = self.init_hidden(bsize)
    mels = self.upsample(mels)
    return self.wavernn(x, mels, h1, h2)

DeepVoice3:

def forward(self, inputs, targets=None, input_lengths=None):
    B = inputs.size(0)
    inputs = self.embed(inputs)
    encoder_outputs = self.encoder(inputs, input_lengths)
    mel_outputs, alignments = self.decoder(encoder_outputs, targets)
    return mel_outputs, alignments

The code snippets show that WaveRNN focuses on generating waveforms directly, while DeepVoice3 uses an encoder-decoder architecture for mel-spectrogram generation. WaveRNN's approach leads to faster inference but requires more complex training, while DeepVoice3 offers more flexibility in voice customization at the cost of slower generation.

A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)

Pros of Tacotron

  • Simpler architecture, making it easier to understand and modify
  • Faster training time due to less complex model
  • More extensive documentation and community support

Cons of Tacotron

  • Generally lower audio quality compared to DeepVoice3
  • Less flexibility in terms of voice characteristics and styles
  • May require more data for comparable results

Code Comparison

Tacotron (model definition):

class Tacotron():
    def __init__(self):
        self.encoder = Encoder()
        self.decoder = Decoder()
        self.postnet = Postnet()

DeepVoice3 (model definition):

class DeepVoice3(nn.Module):
    def __init__(self, n_vocab, embed_dim, ...):
        super(DeepVoice3, self).__init__()
        self.embed = nn.Embedding(n_vocab, embed_dim)
        self.encoder = Encoder(embed_dim, ...)
        self.decoder = Decoder(embed_dim, ...)

Both repositories implement text-to-speech models, but DeepVoice3 offers more advanced features and potentially higher quality output at the cost of increased complexity. Tacotron may be a better choice for those new to TTS or with limited computational resources, while DeepVoice3 is suitable for those seeking state-of-the-art performance and willing to invest more time in understanding and training the model.

DeepMind's Tacotron-2 Tensorflow implementation

Pros of Tacotron-2

  • Implements the full Tacotron 2 architecture, including the WaveNet vocoder
  • Provides pre-trained models for quick experimentation
  • Offers more extensive documentation and usage examples

Cons of Tacotron-2

  • Less flexible for customization compared to DeepVoice3
  • May require more computational resources due to the WaveNet vocoder
  • Has fewer options for different model architectures

Code Comparison

Tacotron-2:

def create_model(hparams):
    return Tacotron2(hparams)

def train(model, train_loader, optimizer, criterion, device):
    model.train()
    for batch in train_loader:
        # Training loop implementation

DeepVoice3:

def create_model(n_speakers, speaker_embed_dim, preset):
    return MultiSpeakerTTSModel(n_speakers, speaker_embed_dim, preset)

def train(model, data_loader, optimizer, criterion, device):
    model.train()
    for batch in data_loader:
        # Training loop implementation

The code snippets show that both repositories use similar structures for model creation and training loops. However, Tacotron-2 focuses on a specific architecture, while DeepVoice3 allows for more customization with multi-speaker support and different presets.

README

Deepvoice3_pytorch

PyTorch implementation of convolutional networks-based text-to-speech synthesis models:

  1. arXiv:1710.07654: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning.
  2. arXiv:1710.08969: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention.

Audio samples are available at https://r9y9.github.io/deepvoice3_pytorch/.

Folks

Online TTS demo

Notebooks intended to be executed on https://colab.research.google.com are available:

Highlights

  • Convolutional sequence-to-sequence model with attention for text-to-speech synthesis
  • Multi-speaker and single speaker versions of DeepVoice3
  • Audio samples and pre-trained models
  • Preprocessor for LJSpeech (en), JSUT (jp) and VCTK datasets, as well as carpedm20/multi-speaker-tacotron-tensorflow compatible custom dataset (in JSON format)
  • Language-dependent frontend text processor for English and Japanese

Samples

Pretrained models

NOTE: the pretrained models are not compatible with master. To be updated soon.

| URL | Model | Data | Hyper parameters | Git commit | Steps |
|-----|-------|------|------------------|------------|-------|
| link | DeepVoice3 | LJSpeech | link | abf0a21 | 640k |
| link | Nyanko | LJSpeech | builder=nyanko,preset=nyanko_ljspeech | ba59dc7 | 585k |
| link | Multi-speaker DeepVoice3 | VCTK | builder=deepvoice3_multispeaker,preset=deepvoice3_vctk | 0421749 | 300k + 300k |

To use the pre-trained models, it's highly recommended that you check out the specific git commit noted above, i.e.:

git checkout ${commit_hash}

Then follow the "Synthesize from a checkpoint" section in the README of that specific git commit. Note that the latest development version of the repository may not work with these pre-trained models.

You could try for example:

# pretrained model (20180505_deepvoice3_checkpoint_step000640000.pth)
# hparams (20180505_deepvoice3_ljspeech.json)
git checkout 4357976
python synthesis.py --preset=20180505_deepvoice3_ljspeech.json \
  20180505_deepvoice3_checkpoint_step000640000.pth \
  sentences.txt \
  output_dir

Notes on hyper parameters

  • Default hyper parameters, used during the preprocessing/training/synthesis stages, are tuned for English TTS using the LJSpeech dataset. You will have to change some of the parameters if you want to try other datasets. See hparams.py for details.
  • builder specifies which model you want to use. deepvoice3, deepvoice3_multispeaker [1] and nyanko [2] are supported.
  • The hyper parameters described in the DeepVoice3 paper for the single-speaker model didn't work for the LJSpeech dataset, so I changed a few things: added dilated convolutions, more channels, more layers, a guided attention loss, etc. See the code for details. The changes are also applied to the multi-speaker model.
  • Multiple attention layers are hard to learn. Empirically, one or two (first and last) attention layers seem to be enough.
  • With guided attention (see https://arxiv.org/abs/1710.08969), alignments become monotonic more quickly and reliably if we use multiple attention layers. With guided attention, I can confirm that five attention layers become monotonic, though I cannot get speech quality improvements. (A minimal sketch of the guided attention penalty is shown after this list.)
  • Binary divergence (described in https://arxiv.org/abs/1710.08969) seems to stabilize training, particularly for deep (> 10 layers) networks.
  • Adam with step lr decay works. However, for deeper networks, I find Adam + Noam's lr scheduler more stable.
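
The guided attention penalty mentioned above is easy to sketch. Below is a minimal, self-contained PyTorch illustration of the soft diagonal weight from https://arxiv.org/abs/1710.08969; it is not the implementation used in this repository (see the training code for that), and the batch layout and the value of g are assumptions.

import torch

def guided_attention_matrix(N, T, g=0.2):
    # W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)):
    # positions far from the text/audio diagonal get weights close to 1.
    n = torch.arange(N, dtype=torch.float32).unsqueeze(1) / N  # (N, 1)
    t = torch.arange(T, dtype=torch.float32).unsqueeze(0) / T  # (1, T)
    return 1.0 - torch.exp(-((n - t) ** 2) / (2.0 * g ** 2))   # (N, T)

def guided_attention_loss(attention, g=0.2):
    # attention: (B, N, T) soft alignments; penalize attention mass far from the diagonal.
    B, N, T = attention.shape
    W = guided_attention_matrix(N, T, g).to(attention.device)
    return (attention * W.unsqueeze(0)).mean()

# Example call with a random "alignment", just to show the shapes involved.
A = torch.softmax(torch.randn(2, 50, 200), dim=1)
print(guided_attention_loss(A).item())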

Requirements

  • Python >= 3.5
  • CUDA >= 8.0
  • PyTorch >= v1.0.0
  • nnmnkwii >= v0.0.11
  • MeCab (Japanese only)

Installation

Please install the packages listed above first, and then

git clone https://github.com/r9y9/deepvoice3_pytorch && cd deepvoice3_pytorch
pip install -e ".[bin]"

Getting started

Preset parameters

There are many hyper parameters to be tuned, depending on what model and data you are working on. For typical datasets and models, parameters known to work well (presets) are provided in the repository. See the presets directory for details. Notice that

  1. preprocess.py
  2. train.py
  3. synthesis.py

accept an optional --preset=<json> parameter, which specifies where to load preset parameters from. If you are going to use preset parameters, then you must use the same --preset=<json> throughout preprocessing, training and evaluation, e.g.,

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech

instead of

python preprocess.py ljspeech ~/data/LJSpeech-1.0
# warning! this may use hyper parameters different from those used at the preprocessing stage
python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech

0. Download dataset

1. Preprocessing

Usage:

python preprocess.py ${dataset_name} ${dataset_path} ${out_dir} --preset=<json>

Supported ${dataset_name}s are:

  • ljspeech (en, single speaker)
  • vctk (en, multi-speaker)
  • jsut (jp, single speaker)
  • nikl_m (ko, multi-speaker)
  • nikl_s (ko, single speaker)

Assuming you use preset parameters known to work well for the LJSpeech dataset / DeepVoice3 and have data in ~/data/LJSpeech-1.0, you can preprocess the data by:

python preprocess.py --preset=presets/deepvoice3_ljspeech.json ljspeech ~/data/LJSpeech-1.0/ ./data/ljspeech

When this is done, you will see extracted features (mel-spectrograms and linear spectrograms) in ./data/ljspeech.
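
If you want a quick sanity check of the extracted features, a minimal sketch like the following can be used. The ./data/ljspeech path matches the command above; the assumption that features are stored as individual .npy files (and their exact naming) should be verified against your preprocessing output.

import glob
import numpy as np

# Load a few extracted feature files and print their shapes and dtypes.
for path in sorted(glob.glob("./data/ljspeech/*.npy"))[:5]:
    feat = np.load(path)
    print(path, feat.shape, feat.dtype)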

1-1. Building a custom dataset (using json_meta)

Building your own dataset, with metadata in JSON format (compatible with carpedm20/multi-speaker-tacotron-tensorflow), is currently supported. Usage:

python preprocess.py json_meta ${list-of-JSON-metadata-paths} ${out_dir} --preset=<json>

You may need to modify the pre-existing preset JSON file, especially n_speakers. For English multi-speaker training, start with presets/deepvoice3_vctk.json.

Assuming you have dataset A (Speaker A) and dataset B (Speaker B), each described by the JSON metadata files ./datasets/datasetA/alignment.json and ./datasets/datasetB/alignment.json, you can preprocess the data by:

python preprocess.py json_meta "./datasets/datasetA/alignment.json,./datasets/datasetB/alignment.json" "./datasets/processed_A+B" --preset=(path to preset json file)

1-2. Preprocessing custom English datasets with long silences (based on vctk_preprocess)

Some datasets, especially automatically generated ones, may include long silences and undesirable leading/trailing noises that undermine the char-level seq2seq model (e.g. VCTK, although this is covered by vctk_preprocess).

To deal with the problem, gentle_web_align.py will

  • Prepare phoneme alignments for all utterances
  • Cut silences during preprocessing

gentle_web_align.py uses Gentle, a Kaldi-based speech-text alignment tool. It accesses a web-served Gentle application, aligns the given sound segments with their transcripts, and converts the result to HTK-style label files to be processed by preprocess.py. Gentle can be run on Linux/Mac/Windows (via Docker).

Preliminary results show that while the HTK/festival/merlin-based method in vctk_preprocess/prepare_vctk_labels.py works better on VCTK, Gentle is more stable with audio clips containing ambient noise (e.g. movie excerpts).

Usage (assuming Gentle is running at localhost:8567, the default when not specified):

  1. When sound files and transcript files are saved in separate folders (e.g. sound files at datasetA/wavs and transcripts at datasetA/txts):
python gentle_web_align.py -w "datasetA/wavs/*.wav" -t "datasetA/txts/*.txt" --server_addr=localhost --port=8567
  2. When sound files and transcript files are saved in a nested structure (e.g. datasetB/speakerN/blahblah.wav and datasetB/speakerN/blahblah.txt):
python gentle_web_align.py --nested-directories="datasetB" --server_addr=localhost --port=8567

Once you have a phoneme alignment for each utterance, you can extract features by running preprocess.py.

2. Training

Usage:

python train.py --data-root=${data-root} --preset=<json> --hparams="parameters you may want to override"

Suppose you are building a DeepVoice3-style model using the LJSpeech dataset; then you can train your model by:

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/

Model checkpoints (.pth) and alignments (.png) are saved in the ./checkpoints directory every 10000 steps by default.
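
If you need to tweak individual hyper parameters on top of a preset, you can use the --hparams override shown in the usage above, for example (the parameter names below are illustrative assumptions; check hparams.py for the keys that actually exist):

python train.py --preset=presets/deepvoice3_ljspeech.json --data-root=./data/ljspeech/ \
  --hparams="batch_size=16,checkpoint_interval=5000"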

NIKL

Please check this in advance and follow the commands below.

python preprocess.py nikl_s ${your_nikl_root_path} data/nikl_s --preset=presets/deepvoice3_nikls.json

python train.py --data-root=./data/nikl_s --checkpoint-dir checkpoint_nikl_s --preset=presets/deepvoice3_nikls.json

4. Monitor with Tensorboard

Logs are dumped in ./log directory by default. You can monitor logs by tensorboard:

tensorboard --logdir=log

5. Synthesize from a checkpoint

Given a list of texts, synthesis.py synthesizes audio signals from a trained model. Usage is:

python synthesis.py ${checkpoint_path} ${text_list.txt} ${output_dir} --preset=<json>

Example test_list.txt:

Generative adversarial network or variational auto-encoder.
Once upon a time there was a dear little girl who was loved by every one who looked at her, but most of all by her grandmother, and there was nothing that she would not have given to the child.
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module.

Advanced usage

Multi-speaker model

VCTK and NIKL are the supported datasets for building a multi-speaker model.

VCTK

Since some audio samples in VCTK have long silences that affect performance, it's recommended to do phoneme alignment and remove silences according to vctk_preprocess.

Once you have phoneme alignment for each utterance, you can extract features by:

python preprocess.py vctk ${your_vctk_root_path} ./data/vctk

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset

If you want to reuse a learned embedding from another dataset, you can instead run:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk \
   --preset=presets/deepvoice3_vctk.json \
   --log-event-path=log/deepvoice3_multispeaker_vctk_preset \
   --load-embedding=20171213_deepvoice3_checkpoint_step000210000.pth

This may improve training speed a bit.

NIKL

You will be able to obtain cleaned-up audio samples in ../nikl_preprocoess. Details can be found here.

Once NIKL corpus is ready to use from the preprocessing, you can extract features by:

python preprocess.py nikl_m ${your_nikl_root_path} data/nikl_m

Now that you have the data prepared, you can train a multi-speaker version of DeepVoice3 by:

python train.py --data-root=./data/nikl_m  --checkpoint-dir checkpoint_nikl_m \
   --preset=presets/deepvoice3_niklm.json

Speaker adaptation

If you have very limited data, you can consider fine-tuning a pre-trained model. For example, using a model pre-trained on LJSpeech, you can adapt it to data from VCTK speaker p225 (30 mins) with the following command:

python train.py --data-root=./data/vctk --checkpoint-dir=checkpoints_vctk_adaptation \
    --preset=presets/deepvoice3_ljspeech.json \
    --log-event-path=log/deepvoice3_vctk_adaptation \
    --restore-parts="20171213_deepvoice3_checkpoint_step000210000.pth" \
    --speaker-id=0

In my experience, this reaches reasonable speech quality much more quickly than training the model from scratch.

There are two important options used above:

  • --restore-parts=<N>: Specifies where to load model parameters from. The differences from the --checkpoint=<N> option are: 1) --restore-parts=<N> ignores all invalid parameters, while --checkpoint=<N> doesn't; 2) --restore-parts=<N> tells the trainer to start from step 0, while --checkpoint=<N> tells the trainer to continue from the last step. --checkpoint=<N> should be fine if you are using exactly the same model and continuing to train, but --restore-parts=<N> is useful if you want to customize your model architecture and still take advantage of a pre-trained model.
  • --speaker-id=<N>: Specifies which speaker's data is used for training. This should only be specified if you are using a multi-speaker dataset. For VCTK, speaker ids are automatically assigned incrementally (0, 1, ..., 107) according to the speaker_info.txt in the dataset.

If you are training a multi-speaker model, speaker adaptation will only work when n_speakers is identical.

Troubleshooting

#5 RuntimeError: main thread is not in main loop

This may happen depending on the matplotlib backend you are using. Try changing the matplotlib backend and see if it works, as follows:

MPLBACKEND=Qt5Agg python train.py ${args...}

In #78, engiecat reported that changing the matplotlib backend from Tkinter (TkAgg) to PyQt5 (Qt5Agg) fixed the problem.
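
Alternatively, if you prefer selecting the backend in code rather than via an environment variable, a minimal sketch (the backend must be set before matplotlib.pyplot is imported):

import matplotlib
# Choose a non-Tk backend before anything imports matplotlib.pyplot.
# "Agg" is non-interactive; "Qt5Agg" requires PyQt5.
matplotlib.use("Agg")
import matplotlib.pyplot as plt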

Sponsors

Acknowledgements

Part of the code was adapted from the following projects:

Banner and logo created by @jraulhernandezi (#76)