
rsxdalv / tts-generation-webui

TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS)


Top Related Projects

  • TTS (9,296 ⭐) - :robot: :speech_balloon: Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
  • Bark (35,243 ⭐) - 🔊 Text-Prompted Generative Audio Model
  • Tortoise-TTS - A multi-voice TTS system trained with an emphasis on quality
  • StyleTTS2 - StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
  • fairseq (30,331 ⭐) - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Quick Overview

TTS Generation WebUI is a web-based interface for text-to-speech generation using various AI models. It provides a user-friendly platform for generating speech from text input, offering multiple voice options and customization features. The project aims to make advanced TTS technology accessible to users without requiring deep technical knowledge.

Pros

  • Easy-to-use web interface for text-to-speech generation
  • Supports multiple TTS models and voices
  • Offers customization options for speech output
  • Provides a convenient way to experiment with different TTS technologies

Cons

  • May require significant computational resources for some models
  • Limited to the specific models and voices implemented in the project
  • Potential for inconsistent results across different TTS engines
  • Requires setup and configuration, which might be challenging for non-technical users

Code Examples

```python
# Example 1: Initializing the TTS engine
from tts_generation import TTSEngine

engine = TTSEngine(model="tacotron2", device="cuda")

# Example 2: Generating speech from text
text = "Hello, world! This is a text-to-speech example."
audio = engine.generate_speech(text)

# Example 3: Saving the generated audio to a file
engine.save_audio(audio, "output.wav")

# Example 4: Changing voice settings
engine.set_voice(speaker_id=1, language="en")
audio = engine.generate_speech("This is spoken in a different voice.")
```

Getting Started

To get started with TTS Generation WebUI:

  1. Clone the repository:

    git clone https://github.com/rsxdalv/tts-generation-webui.git
    
  2. Install dependencies:

    cd tts-generation-webui
    pip install -r requirements.txt
    
  3. Run the web interface:

    python app.py
    
  4. Open a web browser and navigate to http://localhost:7860 to access the TTS Generation WebUI.

Competitor Comparisons


Pros of TTS

  • More comprehensive and feature-rich TTS library
  • Better documentation and community support
  • Offers a wider range of pre-trained models and voices

Cons of TTS

  • Steeper learning curve for beginners
  • Requires more computational resources
  • Less focus on user-friendly web interface

Code Comparison

TTS:

```python
from TTS.api import TTS

tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")
```

tts-generation-webui:

```python
from TTS.api import TTS

model = TTS("tts_models/multilingual/multi-dataset/your_tts", gpu=True)
model.tts_to_file(text="Hello world!", speaker_wav="path/to/speaker.wav", language="en", file_path="output.wav")
```

Both repositories use the TTS library, but tts-generation-webui focuses on providing a web interface for easier use, while TTS offers more flexibility and control over the TTS process.


Pros of Bark

  • More advanced and versatile text-to-speech model with multilingual support
  • Capable of generating non-speech sounds and music
  • Actively maintained by a dedicated AI research company

Cons of Bark

  • Requires more computational resources and may be slower for generation
  • Less user-friendly interface, primarily designed for developers
  • Limited customization options for voice characteristics

Code Comparison

Bark:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models

preload_models()
text = "Hello, I'm a spoken audio clip generated using Bark."
audio_array = generate_audio(text)
```

TTS Generation WebUI:

```python
import gradio as gr
from TTS.api import TTS

tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")
gr.Interface(fn=tts.tts, inputs="text", outputs="audio").launch()
```

The Bark code snippet demonstrates its focus on generating audio programmatically, while TTS Generation WebUI emphasizes creating a user interface for text-to-speech conversion. Bark's approach is more flexible but requires more setup, whereas TTS Generation WebUI provides a simpler, more accessible interface for end-users.


Pros of Tortoise-TTS

  • More advanced and feature-rich TTS system with higher quality output
  • Supports multi-voice synthesis and voice cloning capabilities
  • Offers fine-grained control over various aspects of speech generation

Cons of Tortoise-TTS

  • Higher computational requirements and slower inference times
  • More complex setup and usage compared to TTS Generation WebUI
  • Limited web interface options out of the box

Code Comparison

Tortoise-TTS:

```python
from tortoise.api import TextToSpeech

tts = TextToSpeech()
wav = tts.tts_with_preset("Hello world!", voice_samples=["path/to/sample.wav"], preset="fast")
```

TTS Generation WebUI:

```python
from TTS.api import TTS

tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Hello world!", file_path="output.wav")
```

The code snippets demonstrate that Tortoise-TTS offers more advanced features like voice cloning, while TTS Generation WebUI provides a simpler interface for basic TTS functionality.


Pros of StyleTTS2

  • Offers more advanced voice cloning capabilities
  • Provides better control over prosody and speaking style
  • Supports multi-speaker TTS with a single model

Cons of StyleTTS2

  • Requires more computational resources
  • Has a steeper learning curve for beginners
  • Less user-friendly interface compared to tts-generation-webui

Code Comparison

StyleTTS2:

```python
mel = model.style_encoder(style_wav)
text = model.get_text(text, language)
audio = model.infer(text, mel, alpha=alpha, beta=beta, diffusion_steps=steps)
```

tts-generation-webui:

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
wav = tts.tts(text=text, speaker_wav=speaker_wav, language=language)
```

StyleTTS2 offers more granular control over the generation process, allowing for style encoding and diffusion steps adjustment. tts-generation-webui provides a simpler interface with fewer parameters, making it more accessible for quick text-to-speech tasks.


Pros of fairseq

  • Comprehensive toolkit for sequence modeling tasks
  • Supports a wide range of NLP and speech processing tasks
  • Highly scalable and optimized for performance

Cons of fairseq

  • Steeper learning curve due to its complexity
  • Requires more computational resources
  • Less focused on TTS-specific features

Code Comparison

fairseq:

from fairseq.models.wav2vec import Wav2VecModel

model = Wav2VecModel.from_pretrained('path/to/model')
wav_input_16khz = torch.randn(1,10000)
z = model.feature_extractor(wav_input_16khz)
c = model.feature_aggregator(z)

tts-generation-webui:

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(text="Hello world!", file_path="output.wav")

The fairseq code demonstrates its flexibility for various audio processing tasks, while tts-generation-webui focuses on simplifying the TTS process with a user-friendly API. fairseq offers more control over the underlying model, whereas tts-generation-webui provides a streamlined approach for generating speech from text.


README

TTS Generation WebUI

Download || Upgrading || Manual installation || Docker Setup || Configuration Guide || Discord Server || Open In Colab || Feedback / Bug reports

List of models: Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNeT, Stable Audio, Maha TTS, MMS, and more.

Note: Not all models support all platforms. For example, MusicGen and AudioGen are not supported on MacOS as of yet.

Videos

  • Bark TTS, Seamless Translation, RVC, Music Generation and More
  • TTS Generation WebUI - A Tool for Text to Speech and Voice Cloning
  • Text to speech and voice cloning - TTS Generation WebUI

Changelog

Aug 5:

  • Fix Bark in React UI, add Max Generation Duration.
  • Change AudioCraft Plus extension models directory to ./data/models/audiocraft_plus/
  • Improve model unloading for MusicGen and AudioGen. Add unload models button to MusicGen and AudioGen.
  • Add Huggingface Cache Manager extension.

Aug 4:

  • Add XTTS-RVC-UI extension, XTTS Fine-tuning demo extension.

Aug 3:

  • Add Riffusion extension, AudioCraft Mac extension, Bark Legacy extension.

Aug 2:

  • Add deprecation warning to old installer.
  • Unify error handling and simplify tab loading.

Aug 1:

  • Add "Attempt Update" button for external extensions.
  • Skip reinstalling packages when pip_packages version is not changed.
  • Synchronize Gradio Port with React UI.
  • Change default Gradio port to 7770 from 7860.

July 2024


July 31:

  • Fix React UI's MusicGen after the Gradio changes.
  • Add unload button to Whisper extension.

July 29:

  • Change FFMpeg to 4.4.2 from conda-forge in order to support more platforms, including Mac M1.
  • Disable tortoise CVVP.

July 26:

  • Whisper extension
  • Experimental AMD ROCM install support. (Linux only)

July 25:

  • Add diagnostic scripts for MacOS and Linux.
  • Add better error details for tabs.
  • Fix .sh script execution permissions for the installers on Linux and MacOS.

July 21:

  • Add Gallery History extension (adapted from the old gallery view)
  • Convert Simple Remixer to extension
  • Fix update.py to use the newer torch versions (update.py is only for legacy purposes and will likely break)
  • Add Diagnostic script and Force Reinstall scripts for Windows.

July 20:

  • Fix Discord join link
  • Simplify Bark further, removing excessive complexity in code.
  • Add UI/Modular extensions; these allow installing new models and features in the UI. In the future, models will start as extensions before being added permanently.
  • Disable Gallery view in outputs
  • Known issue: Firefox fails at showing outputs in Gradio, it fails at fetching them from backend. Within React UI this works fine.

July 15:

  • Comment - As the React UI has been out for a long time now, the Gradio UI is going to serve only the functions to the user, without the extremely complicated UI that it cannot handle. There is a real shortage of development time to add new models and features, and the old style of integration was not viable. As the new APIs and 'the role of the model' are defined, it will become possible to have extensions for entire models, enabling a lot more flexibility and lighter installations.
  • Start scaling back Gradio UI complexity - removed send to RVC/Demucs/Voice buttons. (Remove internal component Joutai).
  • Add version.json for better updates in the future.
  • Reduce Gradio Bark maximum number of outputs to 1.
  • Add unload model button to Tortoise, also unload the model before loading the next one/changing parameters, thus tortoise no longer uses 2x model memory during the settings change.

July 14:

  • Regroup Gradio tabs into groups - Text to Speech, Audio Conversion, Music Generation, Outputs and Settings
  • Clean up the header, add link for feedback
  • Add seed control to Stable Audio
  • Fix Stable Audio filename bug with newlines
  • Disable "Simple Remixer" Gradio tab
  • Fix bark voice clone & RVC once more
  • Add "Installed Packages" tab for debugging

July 13:

  • Major upgrade to Torch 2.3.1 and xformers 0.0.27
    • All users, including Mac and CPU will now have the same PyTorch version.
  • Upgrade CUDA to 11.8
  • Force python to be 3.10.11
  • Modify installer to allow upgrading Python and Torch without reinstalling (currently major version 2)
  • Fix magnet default params for better quality
  • Improve installer script checks to avoid bugs
  • Update StyleTTS2

July 11:

  • Improve Stable Audio generation filenames
  • Add force reinstall to torch repair
  • Make the installer auto-update before running

July 8:

  • Change the installation process to reduce package clashes and enable torch version flexibility.

July 6:

  • Initial release of new mamba based installer.
  • Save Stable Audio results to outputs-rvc/StableAudio folder.
  • Add a disclaimer to Stable Audio model selection and show better error messages when files are missing.

July 1:

  • Optimize Stable Audio memory usage after generation.
  • Open React UI automatically only if gradio also opens automatically.
  • Remove unnecessary conda git reinstall.
  • Update to latest Stable Audio which has mps support (requires newer torch versions).

June 2024

June 22:

  • Add Stable Audio to Gradio.

June 21:

  • Add Vall-E-X demo to React UI.
  • Open React UI automatically in browser, fix the link again.
  • Add Split By Length to React/Tortoise.
  • Fix UVR5 demo folders.
  • Set fairseq version to 0.12.2 for Linux and Mac. (#323)
  • Improve generation history for all React UI tabs.

May 17:

  • Fix Tortoise presets in React UI.

May 9:

  • Add MMS to React UI.
  • Improve React UI and codebase.

May 4:

  • Group Changelog by month

April 2024

Apr 28:

  • Add Maha TTS to React UI.
  • Add GPU Info to React UI.

Apr 6:

  • Add Vall-E-X generation demo tab.
  • Add MMS demo tab.
  • Add Maha TTS demo tab.
  • Add StyleTTS2 demo tab.

Apr 5:

  • Fix RVC installation bug.
  • Add basic UVR5 demo tab.

Apr 4:

  • Upgrade RVC to include RVMPE and FCPE. Remove the direct file input for models and indexes due to file duplication. Improve React UI interface for RVC.

March 2024


Mar 28:

  • Add GPU Info tab

Mar 27:

  • Add information about voice cloning to tab voice clone

Mar 26:

  • Add Maha TTS demo notebook

Mar 22:

  • Vall-E X demo via notebook (#292)
  • Add React UI to Docker image
  • Add install disclaimer

Mar 16:

  • Upgrade vocos to 0.1.0

Mar 14:

  • StyleTTS2 Demo Notebook

Mar 13:

  • Add Experimental Pipeline (Bark / Tortoise / MusicGen / AudioGen / MAGNeT -> RVC / Demucs / Vocos) (#287)
  • Fix RVC bug with model reloading on each generation. For short inputs that results in a visible speedup.

Mar 11:

  • Add Play as Audio and Save to Voices to bark (#286)
  • Change UX to show that files are deleted from favorites
  • Fix images for bark voices not showing
  • Fix audio playback in favorites

Mar 10:

  • Add Batching to React UI Magnet (#283)
  • Add audio to audio translation to SeamlessM4T (#284)

Mar 3:

  • Add MMS demo as a notebook
  • Add MultiBandDiffusion high VRAM disclaimer

February 2024


Feb 21:

  • Fix Docker container builds and bug with Docker-Audiocraft

January 2024


Jan 21:

  • Add CPU/M1 torch auto-repair script with each update. To disable, edit check_cuda.py and change FORCE_NO_REPAIR = True

Jan 16:

  • Upgrade MusicGen, adding support for stereo and large melody models
  • Add MAGNeT

Jan 15:

  • Upgraded Gradio to 3.48.0
    • Several visual bugs have appeared, if they are critical, please report them or downgrade gradio.
    • Gradio: Suppress useless warnings
  • Suppress Triton warnings
  • Gradio-Bark: Fix "Use last generation as history" behavior, empty selection no longer errors
  • Improve extensions loader display
  • Upgrade transformers to 4.36.1 from 4.31.0
  • Add SeamlessM4T Demo

Jan 14:

  • React UI: Fix missing directory errors

Jan 13:

  • React UI: Fix missing npm build step from automatic install

Jan 12:

  • React UI: Fix names for audio actions
  • Gradio: Fix multiple API warnings
  • Integration - React UI now is launched alongside Gradio, with a link to open it

Jan 11:

  • React UI: Make the build work without any errors

Jan 9:

  • React UI
    • Fix 404 handler for Wavesurfer
    • Group Bark tabs together

Jan 8:

  • Release React UI

2023


October 2023

Oct 26:

  • Improve model selection UX for Musicgen

September 2023

Sep 5:

  • Add voice mixing to Bark
  • Add v1 Burn in prompt to Bark (Burn in prompts are for directing the semantic model without spending time on generating the audio. The v1 works by generating the semantic tokens and then using it as a prompt for the semantic model.)
  • Add generation length limiter to Bark

August 2023

Aug 26:

  • Add Send to RVC, Demucs, Vocos buttons to Bark and Vocos

Aug 21:

  • Add torchvision install to colab for musicgen issue fix
  • Remove rvc_tab file logging

Aug 20:

  • Fix MBD by reinstalling hydra-core at the end of an update

Aug 18:

  • CI: Add a GitHub Action to automatically publish docker image.

Aug 16:

  • Add "name" to tortoise generation parameters

Aug 15:

  • Pin torch to 2.0.0 in all requirements.txt files
  • Bump audiocraft and bark versions
  • Remove Tortoise transformers fix from colab
  • Update Tortoise to 2.8.0

Aug 13:

  • Potentially big fix for new user installs that had issues with GPU not being supported

Aug 11:

  • Tortoise hotfix thanks to manmay-nakhashi
  • Add Tortoise option to change tokenizer

Aug 8:

  • Update AudioCraft, improving MultiBandDiffusion performance
  • Fix Tortoise parameter 'cond_free' mismatch with 'ultra_fast' preset

Aug 7:

  • add tortoise deepspeed fix to colab

Aug 6:

  • Fix audiogen + mbd error, add tortoise fix for colab

Aug 2:

  • Fix Model locations not showing after restart

July 2023

July 24:

  • Change bark file format to include history hash: ...continued_generation... -> ...from_3ea0d063...

June 2023

June 18:

  • Update to newest audiocraft, add longer generations

June 5:

  • Fix "Save to Favorites" button on bark generation page, clean up console (v4.1.1)
  • Add "Collections" tab for managing several different data sets and easier curation.

June 4:

  • Update to v4.1 - improved hash function, code improvements

June 3:

  • Update to v4 - new output structure, improved history view, codebase reorganization, improved metadata, output extensions support

May 2023

May 21:

  • Update to v3 - voice clone demo

May 17:

  • Update to v2 - generate results as they appear, preview long prompt generations piece by piece, enable up to 9 outputs, UI tweaks

May 16:

  • Add gradio settings tab, fix gradio errors in console, improve logging.
  • Update History and Favorites with "use as voice" and "save voice" buttons
  • Add voices tab
  • Bark tab: Remove "or Use last generation as history"
  • Improve code organization

May 10:

  • Enable the possibility of reusing history prompts from older generations. Save generations as npz files. Add a convenient method of reusing any of the last 3 generations for the next prompts. Add a button for saving and collecting history prompts under /voices. https://github.com/rsxdalv/tts-generation-webui/pull/10

May 2:

  • Added support for history recycling to continue longer prompts manually
  • Added support for v2 prompts

Before:

  • Added support for Tortoise TTS

Upgrading

In case of issues, feel free to contact the developers.

Upgrading from v6 to new installer

Recommended: Fresh install

  • Download the new version and run the start_tts_webui.bat (Windows) or start_tts_webui.sh (MacOS, Linux)
  • Once it is finished, close the server.
  • Recommended: Copy the old generations to the new directory, such as favorites/ outputs/ outputs-rvc/ models/ collections/ config.json
  • With caution: you can copy the whole new tts-generation-webui directory over the old one, but there might be some old files that are lost.
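
The "copy the old generations" step above can be sketched as a small shell helper. This is an illustrative script, not part of the project; the old-install path is a placeholder you pass in yourself:

```shell
# Hypothetical helper: copy generations and config from an old install
# into the current (new) tts-generation-webui directory.
copy_old_data() {
    old_dir="$1"
    for d in favorites outputs outputs-rvc models collections; do
        # copy each data directory only if it exists in the old install
        [ -d "$old_dir/$d" ] && cp -r "$old_dir/$d" .
    done
    # copy the old config if present
    [ -f "$old_dir/config.json" ] && cp "$old_dir/config.json" .
    return 0
}
```

Run it from inside the new directory, e.g. `copy_old_data ../tts-generation-webui-old` (path is an example).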

In-place upgrade, can delete some files, tweaks

  • Update the existing installation using the update_platform script
  • After the update run the new start_tts_webui.bat (Windows) or start_tts_webui.sh (MacOS, Linux) inside of the tts-generation-webui directory
  • Once the server starts, check if it works.
  • With caution: if the new server works, within the one-click-installers directory, delete the old installer_files.

Is there any more optimal way to do this?

Not exactly. The dependencies clash, especially between conda and python (and the dependencies are already in a critical state; moving them to conda is a ways off). Therefore, while it might be possible to just replace the old installer with the new one and run the update, the resulting problems would be unpredictable and unfixable. Updating the installer requires a lot of testing, so it is not done lightly.

New Installer

  • Download the repository as a zip file and extract it.
  • Run start_tts_webui.bat or start_tts_webui.sh to start the server. The server will be available at http://localhost:7860
  • Output log will be available in the installer_scripts/output.log file.

Manual installation (not recommended)

  • These instructions might not reflect all of the latest fixes and adjustments, but could be useful as a reference for debugging or understanding what the installer does. Hopefully they can be a basis for supporting new platforms, such as AMD/Intel.

  • Install conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html)

  • Set up an environment: conda create -n venv python=3.10

  • Install git and node.js: conda install -y -c conda-forge git nodejs conda

  • a) Either Continue with the installer script

    • activate the environment: conda activate venv and
    • (venv) node installer_scripts\init_app.js
    • then run the server with (venv) python server.py
  • b) Or install the requirements manually

    • Set up pytorch with CUDA or CPU (https://pytorch.org/audio/stable/build.windows.html#install-pytorch):
      • (venv) conda install pytorch torchvision torchaudio cpuonly ffmpeg -c pytorch for CPU/Mac
      • (venv) conda install -y -k pytorch[version=2,build=py3.10_cuda11.7*] torchvision torchaudio pytorch-cuda=11.7 cuda-toolkit ninja ffmpeg -c pytorch -c nvidia/label/cuda-11.7.0 -c nvidia for CUDA
    • Clone the repo: git clone https://github.com/rsxdalv/tts-generation-webui.git
    • Potentially (if errors occur in the next step) need to install build tools (without Visual Studio): https://visualstudio.microsoft.com/visual-cpp-build-tools/
    • Install the requirements:
      • activate the environment: conda activate venv and
      • install all the requirements*.txt (this list might not be up to date, check https://github.com/rsxdalv/tts-generation-webui/blob/main/Dockerfile#L39-L40):
        • (venv) pip install -r requirements.txt
        • (venv) pip install -r requirements_audiocraft.txt
        • (venv) pip install -r requirements_bark_hubert_quantizer.txt
        • (venv) pip install -r requirements_rvc.txt
        • (venv) pip install hydra-core==1.3.2
        • (venv) pip install -r requirements_styletts2.txt
        • (venv) pip install -r requirements_vall_e.txt
        • (venv) pip install -r requirements_maha_tts.txt
        • (venv) pip install -r requirements_stable_audio.txt
        • (venv) pip install soundfile==0.12.1
      • Due to pip-torch incompatibilities, torch will be reinstalled as 2.0.0; it might therefore be necessary to reinstall it after the requirements if you are on CPU/Mac or installed a torch version other than 2.0.0:
        • (venv) conda install pytorch torchvision torchaudio cpuonly ffmpeg -c pytorch for CPU/Mac
        • (venv) conda install -y -k pytorch[version=2,build=py3.10_cuda11.7*] torchvision torchaudio pytorch-cuda=11.7 cuda-toolkit ninja ffmpeg -c pytorch -c nvidia/label/cuda-11.7.0 -c nvidia for CUDA
      • build the react app: (venv) cd react-ui && npm install && npm run build
    • run the server: (venv) python server.py

React UI

  • Install nodejs (if not already installed with conda)
  • Install react dependencies: npm install
  • Build react: npm run build
  • Run react: npm start
  • Also run the python server: python server.py or with start_(platform) script

Docker Setup

tts-generation-webui can also be run inside a Docker container. To get started, pull the image from the GitHub Container Registry:

docker pull ghcr.io/rsxdalv/tts-generation-webui:main

Once the image has been pulled it can be started with Docker Compose:

docker compose up -d

The container will take some time to generate the first output while models are downloaded in the background. The status of this download can be verified by checking the container logs:

docker logs tts-generation-webui

Building the image yourself

If you wish to build your own docker container, you can use the included Dockerfile:

docker build -t tts-generation-webui .

Please note that the docker-compose file needs to be edited to use the image you just built.
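
For reference, the compose service might look roughly like the sketch below. This is an assumption based on the pull and port details above, not the repository's actual docker-compose.yml; check that file for the authoritative definition:

```yaml
# Illustrative docker-compose.yml sketch (service name and volume are assumptions)
services:
  tts-generation-webui:
    image: ghcr.io/rsxdalv/tts-generation-webui:main
    container_name: tts-generation-webui
    ports:
      - "7860:7860"
```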

Extra Voices for Bark, Prompt Samples

PromptEcho

https://rsxdalv.github.io/bark-speaker-directory/

Bark Readme

README_Bark.md

Info about managing models, caches and system space for AI projects

https://github.com/rsxdalv/tts-generation-webui/discussions/186#discussioncomment-7291274

Screenshots

React UI, MusicGen and RVC screenshots.

Examples

audio__bark__continued_generation__2024-05-04_16-07-49_long.webm

audio__bark__continued_generation__2024-05-04_16-09-21_long.webm

audio__bark__continued_generation__2024-05-04_16-10-55_long.webm

Open Source Libraries

This project utilizes the following open source libraries:

Ethical and Responsible Use

This technology is intended for enablement and creativity, not for harm.

By engaging with this AI model, you acknowledge and agree to abide by these guidelines, employing the AI model in a responsible, ethical, and legal manner.

  • Non-Malicious Intent: Do not use this AI model for malicious, harmful, or unlawful activities. It should only be used for lawful and ethical purposes that promote positive engagement, knowledge sharing, and constructive conversations.
  • No Impersonation: Do not use this AI model to impersonate or misrepresent yourself as someone else, including individuals, organizations, or entities. It should not be used to deceive, defraud, or manipulate others.
  • No Fraudulent Activities: This AI model must not be used for fraudulent purposes, such as financial scams, phishing attempts, or any form of deceitful practices aimed at acquiring sensitive information, monetary gain, or unauthorized access to systems.
  • Legal Compliance: Ensure that your use of this AI model complies with applicable laws, regulations, and policies regarding AI usage, data protection, privacy, intellectual property, and any other relevant legal obligations in your jurisdiction.
  • Acknowledgement: By engaging with this AI model, you acknowledge and agree to abide by these guidelines, using the AI model in a responsible, ethical, and legal manner.

License

Codebase and Dependencies

The codebase is licensed under MIT. However, it's important to note that when installing the dependencies, you will also be subject to their respective licenses. Although most of these licenses are permissive, there may be some that are not. Therefore, it's essential to understand that the permissive license only applies to the codebase itself, not the entire project.

That being said, the goal is to maintain MIT compatibility throughout the project. If you come across a dependency that is not compatible with the MIT license, please feel free to open an issue and bring it to our attention.

Known non-permissive dependencies:

| Library | License | Notes |
| --- | --- | --- |
| encodec | CC BY-NC 4.0 | Newer versions are MIT, but need to be installed manually |
| diffq | CC BY-NC 4.0 | Optional in the future, not necessary to run, can be uninstalled, should be updated with demucs |
| lameenc | GPL License | Future versions will make it LGPL, but need to be installed manually |
| unidecode | GPL License | Not mission critical, can be replaced with another library, issue: https://github.com/neonbjb/tortoise-tts/issues/494 |

Model Weights

Model weights have different licenses, please pay attention to the license of the model you are using.

Most notably:

  • Bark: MIT
  • Tortoise: Unknown (Apache-2.0 according to repo, but no license file in HuggingFace)
  • MusicGen: CC BY-NC 4.0
  • AudioGen: CC BY-NC 4.0

Compatibility / Errors

Audiocraft is currently only compatible with Linux and Windows. MacOS support still has not arrived, although it might be possible to install manually.

Torch being reinstalled

Due to limitations of the python package manager (pip), torch can get reinstalled several times. This is a wide-ranging issue affecting pip and torch.

Red messages in console

These messages:

```
---- requires ----, but you have ---- which is incompatible.
```

are completely normal. They are both a limitation of pip and a consequence of this Web UI combining many different AI projects. Since the projects are not always compatible with each other, they complain about the other projects being installed. This is normal and expected, and in the end, despite the warnings/errors, the projects work together. It's not clear if this situation will ever be resolvable, but that is the hope.
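
After the warnings settle, it can be useful to check which versions actually ended up installed. A minimal standard-library sketch (the package names in the comment are examples, not guarantees about your environment):

```python
from importlib import metadata

def installed_version(package: str):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# e.g. installed_version("torch") returns a version string such as "2.3.1",
# or None if torch is not installed in the current environment
```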

Configuration Guide

You can configure the interface through the "Settings" tab or, for advanced users, via the config.json file in the root directory (not recommended). Below is a detailed explanation of each setting:

Model Configuration

| Argument | Default Value | Description |
| --- | --- | --- |
| text_use_gpu | true | Determines whether the GPU should be used for text processing. |
| text_use_small | true | Determines whether a "small" or reduced version of the text model should be used. |
| coarse_use_gpu | true | Determines whether the GPU should be used for "coarse" processing. |
| coarse_use_small | true | Determines whether a "small" or reduced version of the "coarse" model should be used. |
| fine_use_gpu | true | Determines whether the GPU should be used for "fine" processing. |
| fine_use_small | true | Determines whether a "small" or reduced version of the "fine" model should be used. |
| codec_use_gpu | true | Determines whether the GPU should be used for codec processing. |
| load_models_on_startup | false | Determines whether the models should be loaded during application startup. |
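
To illustrate how settings like these are typically merged over defaults from config.json, here is a hedged sketch; the loader below is illustrative, not the project's actual code:

```python
import json
from pathlib import Path

# Defaults mirroring the Model Configuration table above
DEFAULTS = {
    "text_use_gpu": True,
    "text_use_small": True,
    "coarse_use_gpu": True,
    "coarse_use_small": True,
    "fine_use_gpu": True,
    "fine_use_small": True,
    "codec_use_gpu": True,
    "load_models_on_startup": False,
}

def load_model_config(path: str = "config.json") -> dict:
    """Merge on-disk settings over the defaults; a missing file yields the defaults."""
    cfg = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        cfg.update(json.loads(p.read_text()))
    return cfg
```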

Gradio Interface Options

| Argument | Default Value | Description |
| --- | --- | --- |
| inline | false | Display inline in an iframe. |
| inbrowser | true | Automatically launch in a new tab. |
| share | false | Create a publicly shareable link. |
| debug | false | Block the main thread from running. |
| enable_queue | true | Serve inference requests through a queue. |
| max_threads | 40 | Maximum number of total threads. |
| auth | null | Username and password required to access interface, format: username:password. |
| auth_message | null | HTML message provided on login page. |
| prevent_thread_lock | false | Block the main thread while the server is running. |
| show_error | false | Display errors in an alert modal. |
| server_name | 0.0.0.0 | Make app accessible on local network. |
| server_port | null | Start Gradio app on this port. |
| show_tips | false | Show tips about new Gradio features. |
| height | 500 | Height in pixels of the iframe element. |
| width | 100% | Width in pixels of the iframe element. |
| favicon_path | null | Path to a file (.png, .gif, or .ico) to use as the favicon. |
| ssl_keyfile | null | Path to a file to use as the private key file for a local server running on HTTPS. |
| ssl_certfile | null | Path to a file to use as the signed certificate for HTTPS. |
| ssl_keyfile_password | null | Password to use with the SSL certificate for HTTPS. |
| ssl_verify | true | Skip certificate validation. |
| quiet | true | Suppress most print statements. |
| show_api | true | Show the API docs in the footer of the app. |
| file_directories | null | List of directories that Gradio is allowed to serve files from. |
| _frontend | true | Frontend. |
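
Putting the two tables together, a config.json override might look roughly like this. The values are illustrative (7770 is the default Gradio port mentioned in the changelog), and the exact nesting of keys inside config.json may differ; prefer the Settings tab, which writes the file for you:

```json
{
  "text_use_gpu": true,
  "load_models_on_startup": false,
  "inbrowser": true,
  "share": false,
  "server_name": "0.0.0.0",
  "server_port": 7770,
  "show_error": false
}
```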