RVC-Project/Retrieval-based-Voice-Conversion-WebUI

Easily train a good VC model with voice data <= 10 mins!

Top Related Projects

  • w-okada/voice-changer: Realtime Voice Changer
  • CorentinJ/Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time
  • coqui-ai/TTS: 🐸💬 A deep learning toolkit for Text-to-Speech, battle-tested in research and production
  • facebookresearch/fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • yl4579/StyleTTS2: StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

Quick Overview

RVC-Project/Retrieval-based-Voice-Conversion-WebUI is an open-source project that provides a web-based interface for voice conversion using retrieval-based methods. It allows users to convert one person's voice to another's while maintaining the original content and emotion. The project combines various AI technologies to achieve high-quality voice conversion with relatively low computational requirements.

Pros

  • User-friendly web interface for easy access and operation
  • Supports multiple languages and can handle various accents
  • Requires less training data compared to some other voice conversion methods
  • Offers real-time voice conversion capabilities

Cons

  • May require significant computational resources for optimal performance
  • The quality of voice conversion can vary depending on the input audio and target voice
  • Limited documentation for advanced customization and troubleshooting
  • Potential ethical concerns regarding voice cloning and misuse

Code Examples

# Example 1: Loading a pre-trained model
# (illustrative usage; the project's actual class and method signatures
#  vary across RVC versions)
from infer.modules.vc.modules import VC

model = VC(config_path='path/to/config.json')
model.load_model('path/to/model.pth')

# Example 2: Performing voice conversion
input_audio = 'path/to/input.wav'
output_audio = 'path/to/output.wav'
converted_audio = model.convert(input_audio, target_speaker='Speaker1')
converted_audio.save(output_audio)

# Example 3: Real-time voice conversion using the sounddevice library
import sounddevice as sd

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    converted = model.convert_realtime(indata)  # illustrative method name
    outdata[:] = converted

with sd.Stream(callback=callback, channels=1, samplerate=44100):
    sd.sleep(10000)  # Run for 10 seconds

Getting Started

  1. Clone the repository:

    git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git
    cd Retrieval-based-Voice-Conversion-WebUI
    
  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Run the web interface:

    python infer-web.py
    
  4. Open a web browser and navigate to http://localhost:7860 to access the interface.

Competitor Comparisons

Realtime Voice Changer

Pros of voice-changer

  • Real-time voice conversion capabilities
  • Supports multiple voice conversion models (RVC, MMVCv13, MMVCv15, So-VITS-SVC)
  • Cross-platform compatibility (Windows, Mac, Linux)

Cons of voice-changer

  • Less focus on training custom voice models
  • May have higher system requirements for real-time processing
  • Potentially more complex setup for beginners

Code Comparison

voice-changer:

def _onnx_inference(self, wave):
    inputs = {self.input_name: wave}
    out = self.onnx_session.run(None, inputs)[0]
    return out

Retrieval-based-Voice-Conversion-WebUI:

def vc_single(
    sid,
    input_audio,
    f0_up_key,
    f0_file,
    f0_method,
    file_index,
    file_index2,
    # ... (additional parameters)
):
    # Function implementation

The code snippets show different approaches:

  • voice-changer focuses on ONNX inference for real-time processing
  • Retrieval-based-Voice-Conversion-WebUI has a more comprehensive function for voice conversion with various parameters

Both projects aim to provide voice conversion capabilities, but voice-changer emphasizes real-time performance and multiple model support, while Retrieval-based-Voice-Conversion-WebUI offers more customization options and focuses on training custom voice models.
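
For context, the ONNX path in the voice-changer snippet reduces to onnxruntime's standard InferenceSession API. Below is a self-contained sketch of that pattern; the model path and input shape are placeholders, not files shipped by either project:

# Minimal onnxruntime sketch of the _onnx_inference pattern shown above;
# "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
wave = np.zeros((1, 16000), dtype=np.float32)  # one second of silence at 16 kHz
out = session.run(None, {input_name: wave})[0]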

Clone a voice in 5 seconds to generate arbitrary speech in real-time

Pros of Real-Time-Voice-Cloning

  • Focuses on real-time voice cloning, allowing for immediate results
  • Utilizes a pre-trained model, reducing the need for extensive training
  • Provides a more straightforward approach for quick voice cloning tasks

Cons of Real-Time-Voice-Cloning

  • Less customizable compared to Retrieval-based-Voice-Conversion-WebUI
  • May have lower audio quality in some cases due to real-time processing
  • Limited to English language support

Code Comparison

Real-Time-Voice-Cloning:

def load_model(weights_fpath):
    model = SpeakerEncoder()
    checkpoint = torch.load(weights_fpath)
    model.load_state_dict(checkpoint["model_state"])
    return model

Retrieval-based-Voice-Conversion-WebUI:

def get_vc(sid, to_return_protect0):
    global n_spk, tgt_sr, net_g, vc, cpt, version
    if sid == "" or sid == []:
        global hubert_model
        if hubert_model is not None:
            print("clean_empty_cache")
            del net_g, n_spk, vc, hubert_model, tgt_sr
            hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
            if_f0 = cpt.get("f0", 1)
            version = cpt.get("version", "v1")
            return (
                {"visible": False, "__type__": "update"},
                {"visible": False, "__type__": "update"},
                {"visible": False, "__type__": "update"},
                "clean_empty_cache",
            )

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

Pros of TTS

  • More comprehensive text-to-speech solution with multiple models and languages
  • Better documentation and easier integration into existing projects
  • Active development with frequent updates and community support

Cons of TTS

  • Requires more computational resources for training and inference
  • Less focused on voice conversion, primarily a text-to-speech system
  • Steeper learning curve for customization and fine-tuning

Code Comparison

TTS:

from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

Retrieval-based-Voice-Conversion-WebUI:

# Simplified illustration; module paths and helper names here vary across
# RVC versions and may not match the current codebase exactly.
from infer_web import get_vc
from tools.infer_tools import infer_tool

vc = get_vc()
audio = infer_tool.infer(vc, "input.wav", "output.wav")

The code snippets demonstrate that TTS is more straightforward for text-to-speech tasks, while Retrieval-based-Voice-Conversion-WebUI is specifically designed for voice conversion. TTS offers a simpler API for generating speech from text, whereas Retrieval-based-Voice-Conversion-WebUI requires more setup and is tailored for converting one voice to another.

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • Broader scope: fairseq is a comprehensive sequence modeling toolkit, supporting various tasks beyond voice conversion
  • Extensive documentation and examples: Provides detailed guides and tutorials for different use cases
  • Active development and community support: Regular updates and contributions from Facebook AI Research and the open-source community

Cons of fairseq

  • Steeper learning curve: Requires more technical expertise to set up and use effectively
  • Less specialized for voice conversion: May require additional configuration or fine-tuning for specific voice conversion tasks

Code Comparison

Retrieval-based-Voice-Conversion-WebUI:

# Simplified illustration; the real pipeline call takes many more arguments.
import torch
from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
from vc_infer_pipeline import VC

model = SynthesizerTrnMs256NSFsid(*args)
vc = VC(model)
audio = vc.pipeline(input_audio)

fairseq:

# Simplified illustration; fairseq's actual TTS hub API loads models via
# load_model_ensemble_and_task_from_hf_hub and TTSHubInterface helper methods.
from fairseq.models.text_to_speech import TTSHubInterface

model = TTSHubInterface.from_pretrained("tts_transformer_lj")
wav, rate = model.predict("Hello world", voice="ljspeech")

StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

Pros of StyleTTS2

  • More advanced text-to-speech capabilities, including style transfer and prosody control
  • Potentially higher quality voice synthesis with more natural-sounding results
  • Supports multi-speaker and multi-lingual voice conversion

Cons of StyleTTS2

  • May require more computational resources due to its advanced features
  • Potentially more complex to set up and use compared to Retrieval-based-Voice-Conversion-WebUI
  • Less focus on real-time voice conversion, which might be important for some use cases

Code Comparison

StyleTTS2:

style_vector = model.get_style_vector(ref_wav, ref_wav_lengths)
audio = model.infer(text, text_lengths, speakers, style_vector=style_vector)

Retrieval-based-Voice-Conversion-WebUI:

# f0_up_key is the pitch shift in semitones (the derivation shown here is illustrative)
f0_up_key = int(tgt_sr / 16000 * 12)
audio = vc.pipeline(hubert_model, net_g, sid, audio, tgt_sr, f0_up_key)

Both projects offer voice conversion capabilities, but StyleTTS2 focuses more on text-to-speech with style transfer, while Retrieval-based-Voice-Conversion-WebUI emphasizes real-time voice conversion. StyleTTS2 provides more advanced features for controlling voice characteristics, while Retrieval-based-Voice-Conversion-WebUI may be simpler to use for basic voice conversion tasks.
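
One note on the Retrieval-based-Voice-Conversion-WebUI snippet above: f0_up_key is a pitch shift expressed in semitones, so the extracted F0 is scaled by 2**(k/12). A quick worked illustration:

# A shift of k semitones scales F0 by 2**(k/12).
f0_up_key = 12                        # +12 semitones = one octave up
f0_hz = 220.0                         # example source pitch (A3)
print(f0_hz * 2 ** (f0_up_key / 12))  # 440.0 (A4)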

README

The base model is trained on nearly 50 hours of the open-source, high-quality VCTK training set, so there are no copyright concerns; please feel free to use it.

Look forward to the RVCv3 base model: larger parameters, more training data, better results, roughly the same inference speed, and less training data required.

  • Training & inference UI (go-web.bat): freely choose whichever operation you want to perform.
  • Real-time voice conversion UI (go-realtime-gui.bat): we have achieved 170 ms end-to-end latency; with ASIO input/output devices, 90 ms end-to-end is already achievable, though it depends heavily on hardware driver support (see the rough breakdown below).
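
Those latency figures decompose roughly as input buffering plus model inference plus output/driver overhead. An illustrative budget follows; the component numbers are assumptions chosen to match the stated totals, not measurements:

# Rough end-to-end latency model; component values are assumptions,
# only the 170 ms / 90 ms totals come from the text above.
def total_latency_ms(block_ms, inference_ms, driver_ms):
    # one audio block buffered on input and one on output,
    # plus inference time and driver/crossfade overhead
    return 2 * block_ms + inference_ms + driver_ms

print(total_latency_ms(block_ms=50, inference_ms=40, driver_ms=30))  # 170 (generic drivers)
print(total_latency_ms(block_ms=25, inference_ms=30, driver_ms=10))  # 90 (ASIO path)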

Introduction

This repository has the following features:

  • Uses top-1 retrieval to replace input source features with training-set features, eliminating timbre leakage (see the sketch after this list)
  • Trains quickly even on relatively weak GPUs
  • Produces good results even when trained on small amounts of data (collecting at least 10 minutes of low-noise speech is recommended)
  • Supports changing timbre through model fusion (via the ckpt-merge option in the ckpt processing tab)
  • Simple, easy-to-use web interface
  • Can invoke the UVR5 model to quickly separate vocals from accompaniment
  • Uses the state-of-the-art vocal pitch extraction algorithm InterSpeech2023-RMVPE to eliminate the muted-sound problem; delivers the best results (significantly so) while being faster and lighter on resources than crepe_full
  • AMD and Intel GPU acceleration support
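
The top-1 retrieval in the first bullet can be pictured as a nearest-neighbor lookup over the training-set features followed by a weighted blend. Below is a minimal sketch with numpy and faiss, assuming HuBERT-style frame features; the names, dimensions, and index_rate weight are illustrative, not the project's actual API:

# Sketch of top-1 feature retrieval with an (assumed) faiss index over
# training-set features; blending controls how much source timbre leaks through.
import numpy as np
import faiss

d = 256  # assumed feature dimension per frame
train_feats = np.random.rand(10000, d).astype("float32")  # stand-in for indexed training features
index = faiss.IndexFlatL2(d)
index.add(train_feats)

def retrieve_top1(source_feats, index_rate=0.75):
    # replace each source frame with its nearest training-set frame,
    # blended by index_rate (1.0 = full replacement, i.e. least timbre leakage)
    _, idx = index.search(source_feats, 1)
    retrieved = train_feats[idx[:, 0]]
    return index_rate * retrieved + (1 - index_rate) * source_feats

frames = np.random.rand(100, d).astype("float32")  # stand-in for HuBERT frames
blended = retrieve_top1(frames)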

Click here to watch our demo video!

Environment Setup

The following commands must be run in an environment with a Python version greater than 3.8.

Universal method for Windows/Linux/MacOS and other platforms

Choose one of the following methods.

1. Install dependencies via pip

  1. Install PyTorch and its core dependencies; skip if already installed. Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
  2. On Windows with an Nvidia Ampere architecture GPU (RTX 30xx), according to the experience reported in #21, you need to specify the CUDA version matching PyTorch:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
  3. Install the dependencies matching your GPU (a quick sanity check follows this list):
  • Nvidia GPU:
pip install -r requirements.txt
  • AMD/Intel GPU (DirectML):
pip install -r requirements-dml.txt
  • AMD GPU with ROCm (Linux):
pip install -r requirements-amd.txt
  • Intel GPU with IPEX (Linux):
pip install -r requirements-ipex.txt
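
The quick sanity check referenced in step 3: confirm that the installed PyTorch build actually sees your GPU. The snippet covers the Nvidia/CUDA path; DirectML, ROCm, and IPEX builds expose device availability differently:

# Sanity check for the Nvidia/CUDA path after installing PyTorch.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))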

2. Install dependencies via Poetry

Install the Poetry dependency management tool; skip if already installed. Reference: https://python-poetry.org/docs/#installation

curl -sSL https://install.python-poetry.org | python3 -

When installing dependencies via Poetry, Python 3.7-3.10 is recommended; other versions will conflict when installing llvmlite==0.39.0.

poetry init -n
poetry env use "path to your python.exe"
poetry run pip install -r requirements.txt

MacOS

Dependencies can be installed via run.sh:

sh ./run.sh

Preparing Other Pre-trained Models

RVC requires some other pre-trained models for inference and training.

You can download these models from our Hugging Face space.

1. Download assets

Below is a checklist of all the pre-trained models and other files RVC requires; you can find scripts to download them in the tools folder. A small verification sketch follows the list.

  • ./assets/hubert/hubert_base.pt

  • ./assets/pretrained

  • ./assets/uvr5_weights

To use v2 models, you additionally need to download:

  • ./assets/pretrained_v2
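
The verification sketch mentioned above (not part of the repository) can be run from the RVC root directory to confirm the checklist is in place:

# Convenience sketch: verify the asset checklist above from the RVC root.
from pathlib import Path

required = [
    "assets/hubert/hubert_base.pt",
    "assets/pretrained",
    "assets/uvr5_weights",
    "assets/pretrained_v2",  # only required for v2 models
]
for path in required:
    print(path, "OK" if Path(path).exists() else "MISSING")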

2. Install ffmpeg

Skip this step if ffmpeg and ffprobe are already installed.

Ubuntu/Debian users:

sudo apt install ffmpeg

MacOS users:

brew install ffmpeg

Windows users:

Download ffmpeg and ffprobe and place them in the root directory.

3. Download the files required by the RMVPE vocal pitch extraction algorithm

To use the latest RMVPE vocal pitch extraction algorithm, download the pitch extraction model parameters and place the file in the RVC root directory.
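
For example, with Python's standard library; the URL below is an assumption, so verify it against the project's Hugging Face space before use:

# Hedged download sketch; the URL is assumed, not taken from the README.
import urllib.request

url = "https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt"  # assumed location
urllib.request.urlretrieve(url, "rmvpe.pt")  # place rmvpe.pt in the RVC root directory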

Download the DML environment for RMVPE (optional, for AMD/Intel GPU users)

4. ROCm for AMD GPUs (optional, Linux only)

If you want to run RVC on a Linux system using AMD's ROCm technology, first install the required drivers as described here.

If you are using Arch Linux, you can install the required drivers with pacman:

pacman -S rocm-hip-sdk rocm-opencl-sdk

For certain GPU models (e.g. the RX 6700 XT), you may also need to set the following environment variables:

export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=10.3.0

Also make sure your current user is in the render and video groups:

sudo usermod -aG render $USERNAME
sudo usermod -aG video $USERNAME

Getting Started

Launching directly

Start the WebUI with the following command:

python infer-web.py

If you previously installed the dependencies with Poetry, you can start the WebUI like this:

poetry run python infer-web.py

Using the integrated package

Download and extract RVC-beta.7z.

Windows users:

Double-click go-web.bat.

MacOS users:

sh ./run.sh

For Intel GPU users who need IPEX technology (Linux only):

source /opt/intel/oneapi/setvars.sh

Reference Projects

Thanks to all contributors for their efforts