encodec
State-of-the-art deep-learning-based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio.
Top Related Projects
- facebookresearch/audiocraft: Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
- openai/jukebox: Code for the paper "Jukebox: A Generative Model for Music"
- CorentinJ/Real-Time-Voice-Cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time
Quick Overview
EnCodec is an open-source neural audio codec developed by Facebook Research. It aims to provide high-quality audio compression using deep learning techniques, offering a balance between compression efficiency and audio quality. The project is designed to be flexible and adaptable for various audio processing tasks.
Pros
- High-quality audio compression using state-of-the-art neural network techniques
- Flexible architecture that can be adapted for different compression ratios and quality levels
- Open-source implementation, allowing for community contributions and improvements
- Supports both compression and decompression of audio files
Cons
- Requires significant computational resources for training and inference
- May not be as efficient as traditional codecs for low-bitrate scenarios
- Limited documentation and examples for advanced use cases
- Potential compatibility issues with existing audio processing pipelines
Code Examples
- Loading a pre-trained EnCodec model:

```python
import torch
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)
```

- Compressing audio using EnCodec:

```python
import torchaudio

# Load the audio file; this model expects 24 kHz mono input
wav, sr = torchaudio.load("input.wav")
wav = wav.unsqueeze(0)  # add a batch dimension: [B, C, T]

# Compress the audio into encoded frames
with torch.no_grad():
    encoded_frames = model.encode(wav)
```

- Decompressing audio:

```python
# Decode the encoded frames back to a waveform
with torch.no_grad():
    decoded = model.decode(encoded_frames)

# Save the reconstructed audio at the model's sample rate
torchaudio.save("output.wav", decoded.squeeze(0), model.sample_rate)
```
Getting Started
To get started with EnCodec, follow these steps:
- Install the required dependencies:
```bash
pip install torch torchaudio
pip install encodec
```
- Load a pre-trained model and set the target bandwidth:
```python
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)
```
- Use the model to compress and decompress audio as shown in the code examples above; a complete round-trip sketch follows below.
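Putting the steps together, here is a minimal end-to-end sketch (file names are placeholders; `convert_audio` is the pre-processing helper from `encodec.utils` shown later in this README):

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# Load the audio and match the model's sample rate and channel count
wav, sr = torchaudio.load("input.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)
    decoded = model.decode(encoded_frames)

torchaudio.save("output.wav", decoded.squeeze(0), model.sample_rate)
```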
Competitor Comparisons
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
Pros of AudioCraft
- More comprehensive audio generation capabilities, including music and sound effects
- Includes multiple models (MusicGen, AudioGen, EnCodec) for various audio tasks
- Offers a user-friendly interface for audio generation and manipulation
Cons of AudioCraft
- Larger and more complex codebase, potentially harder to integrate or modify
- Requires more computational resources due to its broader scope
- May have a steeper learning curve for users unfamiliar with audio AI models
Code Comparison
EnCodec:

```python
import torchaudio
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()
wav, sr = torchaudio.load("audio.wav")
encoded_frames = model.encode(wav.unsqueeze(0))  # encode expects [B, C, T]
```
AudioCraft:

```python
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained('medium')
model.set_generation_params(duration=8)  # duration is a generation parameter
wav = model.generate(descriptions=['happy rock'])
```
Both repositories focus on audio processing, but AudioCraft offers a more comprehensive suite of tools for audio generation and manipulation. EnCodec is more specialized, focusing primarily on audio compression. AudioCraft's broader scope makes it more versatile but potentially more resource-intensive and complex to use.
Code for the paper "Jukebox: A Generative Model for Music"
Pros of Jukebox
- Generates complete musical compositions with vocals
- Offers more advanced and diverse musical output
- Supports multiple genres and styles of music
Cons of Jukebox
- Requires significant computational resources
- Longer processing time for generating music
- More complex to set up and use
Code Comparison
Jukebox:

```python
vqvae = VQVAE(input_shape, levels, downs_t, strides_t, emb_width, l_bins)
prior = SimplePrior(prior_input_shape, prior_bins, prior_width, prior_depth)
upsamplers = [CondResNet(conditioner_input_shape, res_input_shape, res_output_shape, res_blocks)]
```
EnCodec:

```python
import torchaudio
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()
wav, sr = torchaudio.load("audio.wav")
encoded_frames = model.encode(wav.unsqueeze(0))  # encode expects [B, C, T]
decoded_audio = model.decode(encoded_frames)
```
EnCodec focuses on efficient audio compression and encoding, while Jukebox is designed for generating complete musical compositions. EnCodec offers simpler implementation and faster processing, making it more suitable for real-time applications. Jukebox provides more advanced musical generation capabilities but requires more computational resources and setup complexity.
Clone a voice in 5 seconds to generate arbitrary speech in real-time
Pros of Real-Time-Voice-Cloning
- Focuses specifically on voice cloning, offering a more specialized solution
- Provides real-time capabilities, allowing for immediate voice synthesis
- Includes a user-friendly toolbox for voice cloning experiments
Cons of Real-Time-Voice-Cloning
- Less actively maintained, with fewer recent updates
- May have limited compatibility with newer Python versions and dependencies
- Lacks the broader audio compression capabilities of EnCodec
Code Comparison
Real-Time-Voice-Cloning:

```python
def load_model(checkpoint_path):
    model = SpeakerEncoder()
    checkpoint = torch.load(checkpoint_path)
    model.load_state_dict(checkpoint["model_state"])
    return model
```
EnCodec:

```python
def load_model(path):
    model = EncodecModel.encodec_model_24khz()
    state_dict = torch.load(path)
    model.load_state_dict(state_dict)
    return model
```
Both repositories provide methods for loading pre-trained models, but Real-Time-Voice-Cloning focuses on a speaker encoder model, while EnCodec loads a general audio codec model. EnCodec's approach is more versatile for various audio processing tasks, while Real-Time-Voice-Cloning is tailored specifically for voice cloning applications.
EnCodec: High Fidelity Neural Audio Compression
This is the code for the EnCodec neural codec presented in the paper High Fidelity Neural Audio Compression [abs]. We provide our two multi-bandwidth models:
- A causal model operating at 24 kHz on monophonic audio trained on a variety of audio data.
- A non-causal model operating at 48 kHz on stereophonic audio trained on music-only data.
The 24 kHz model can compress to 1.5, 3, 6, 12 or 24 kbps, while the 48 kHz model supports 3, 6, 12 and 24 kbps. We also provide a pre-trained language model for each of the models, which can further compress the representation by up to 40% without any further loss of quality.
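For instance, selecting a model and a target bandwidth from the `encodec` package looks like this (a minimal sketch; full examples follow in the Usage section):

```python
from encodec import EncodecModel

# Causal 24 kHz mono model: 1.5, 3, 6, 12 or 24 kbps
model_24k = EncodecModel.encodec_model_24khz()
model_24k.set_target_bandwidth(6.0)

# Non-causal 48 kHz stereo model: 3, 6, 12 or 24 kbps
model_48k = EncodecModel.encodec_model_48khz()
model_48k.set_target_bandwidth(12.0)
```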
For reference, we also provide the code for our novel MS-STFT discriminator and the balancer.
Samples
Samples including baselines are provided on our sample page. You can also have a quick demo of what we achieve for 48 kHz music with EnCodec, along with entropy coding, by clicking the thumbnail (original tracks provided by Lucille Crew and Voyageur I).
🤗 Transformers
EnCodec has now been added to 🤗 Transformers. For more information, please refer to the Transformers EnCodec docs.
You can find both the 24 kHz and 48 kHz checkpoints on the 🤗 Hub.
Using 🤗 Transformers, you can leverage EnCodec at scale along with all the other supported models and datasets. ⚡️ Alternatively, you can also directly use the encodec package, as detailed in the Usage section below.
To use it, you first need to set up your development environment:

```bash
pip install -U datasets
pip install git+https://github.com/huggingface/transformers.git@main
```

Then, start embedding your audio datasets at scale!
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor

# dummy dataset; you can swap this with a dataset on the 🤗 Hub or bring your own
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# load the model + processor (for pre-processing the audio)
model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]

# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]

# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values

# you can also extract the discrete codebook representation for LM tasks
# output: concatenated tensor of all the representations
audio_codes = model(inputs["input_values"], inputs["padding_mask"]).audio_codes
```
What's up?
See the changelog for details on releases.
Installation
EnCodec requires Python 3.8 and a reasonably recent version of PyTorch (1.11.0 ideally). To install EnCodec, run one of the following:

```bash
pip install -U encodec  # stable release
pip install -U git+https://git@github.com/facebookresearch/encodec#egg=encodec  # bleeding edge
# or, if you cloned the repo locally:
pip install .
```
Supported platforms: we officially support only Mac OS X (you might need Xcode installed if running on a non-Intel Mac) and recent versions of mainstream Linux distributions. We will try to help out on Windows but cannot provide strong support. Other platforms (iOS / Android / on-board ARM) are not supported.
Usage
You can then use the `encodec` command, either as

```bash
python3 -m encodec [...]
# or
encodec [...]
```
If you want to directly use the compression API, check out `encodec.compress` and `encodec.model`. See hereafter for instructions on how to extract the discrete representation.
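For illustration, here is a sketch of what using that API could look like; the helper names `compress` and `decompress` and their signatures are assumptions based on the `encodec.compress` module and may differ between versions:

```python
import torchaudio
from encodec import EncodecModel
from encodec.compress import compress, decompress  # assumed helpers; check the module

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)

# placeholder file name; you may need convert_audio() first if the input
# does not match the model's sample rate and channel count
wav, sr = torchaudio.load("input.wav")

# assumed: compress() returns the raw .ecdc byte stream
data = compress(model, wav, use_lm=False)
with open("input.ecdc", "wb") as fo:
    fo.write(data)

# assumed: decompress() returns (waveform, sample_rate)
out, out_sr = decompress(data)
torchaudio.save("output.wav", out, out_sr)
```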
Model storage
The models will be automatically downloaded on first use via Torch Hub. For more information on where the models are stored, or how to customize the storage location, check out the Torch Hub documentation.
Compression
```bash
encodec [-b TARGET_BANDWIDTH] [-f] [--hq] [--lm] INPUT_FILE [OUTPUT_FILE]
```
Given any audio file supported by torchaudio on your platform, this compresses it with EnCodec to the target bandwidth (default is 6 kbps; can be 1.5, 3, 6, 12 or 24). OUTPUT_FILE must end in `.ecdc`. If not provided, it defaults to INPUT_FILE with the extension replaced by `.ecdc`.
To use the model operating at 48 kHz on stereophonic audio, pass the `--hq` flag. The `-f` flag forces overwriting an existing output file. Use the `--lm` flag to use the pretrained language model with entropy coding (expect it to be much slower).
If the sample rate or number of channels of the input doesn't match that of the model, the command will automatically resample / reduce channels as needed.
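For example (file names are placeholders):

```bash
# compress input.wav at the default 6 kbps, writing input.ecdc
encodec input.wav

# 48 kHz stereo model at 24 kbps with the language model, overwriting any existing output
encodec -f -b 24 --hq --lm music.wav music.ecdc
```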
Decompression
```bash
encodec [-f] [-r] ENCODEC_FILE [OUTPUT_WAV_FILE]
```
Given a `.ecdc` file previously generated, this will decode it to the given output wav file. If not provided, the output will default to the input with the extension replaced by `.wav`. Use the `-f` flag to force overwriting the output file (if you compress and then decompress, be careful not to overwrite your original file!). Use the `-r` flag if you experience clipping; this will rescale the output file to avoid it.
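For example (file names are placeholders):

```bash
# decode music.ecdc to music.wav, rescaling the output to avoid clipping
encodec -r music.ecdc

# decode to an explicit path, overwriting it if it exists
encodec -f music.ecdc restored.wav
```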
Compression + Decompression
```bash
encodec [-r] [-b TARGET_BANDWIDTH] [-f] [--hq] [--lm] INPUT_FILE OUTPUT_WAV_FILE
```
When OUTPUT_WAV_FILE has the `.wav` extension (as opposed to `.ecdc`), the encodec command will instead compress and immediately decompress without storing the intermediate `.ecdc` file.
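For example:

```bash
# compress at 12 kbps and immediately decompress, keeping no .ecdc file
encodec -b 12 input.wav roundtrip.wav
```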
Extracting discrete representations
The EnCodec model can also be used to extract discrete representations from the audio waveform.
```python
from encodec import EncodecModel
from encodec.utils import convert_audio

import torchaudio
import torch

# Instantiate a pretrained EnCodec model
model = EncodecModel.encodec_model_24khz()
# The number of codebooks used is determined by the selected bandwidth.
# E.g. for a bandwidth of 6 kbps, n_q = 8 codebooks are used.
# Supported bandwidths are 1.5 kbps (n_q = 2), 3 kbps (n_q = 4),
# 6 kbps (n_q = 8), 12 kbps (n_q = 16) and 24 kbps (n_q = 32).
# For the 48 kHz model, only 3, 6, 12 and 24 kbps are supported. The number
# of codebooks for each is half that of the 24 kHz model, as the frame rate is twice as high.
model.set_target_bandwidth(6.0)

# Load and pre-process the audio waveform
wav, sr = torchaudio.load("<PATH_TO_AUDIO_FILE>")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
wav = wav.unsqueeze(0)

# Extract discrete codes from EnCodec
with torch.no_grad():
    encoded_frames = model.encode(wav)
codes = torch.cat([encoded[0] for encoded in encoded_frames], dim=-1)  # [B, n_q, T]
```
Note that the 48 kHz model processes the audio in chunks of 1 second, with an overlap of 1%, and renormalizes the audio to have unit scale. For this model, the output of `model.encode(wav)` is a list (one entry per 1-second frame) of tuples `(codes, scale)`, with `scale` a scalar tensor.
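As an illustration, here is a minimal sketch of consuming those per-chunk `(codes, scale)` pairs with the 48 kHz model (the file path is a placeholder):

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_48khz()
model.set_target_bandwidth(6.0)

wav, sr = torchaudio.load("<PATH_TO_AUDIO_FILE>")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)

# Each entry covers roughly one second of audio: codes is [B, n_q, T],
# scale a scalar tensor (it is None for the 24 kHz model).
for codes, scale in encoded_frames:
    print(codes.shape, scale)

# decode() accepts the same list of (codes, scale) frames
with torch.no_grad():
    restored = model.decode(encoded_frames)
```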
Installation for development
This will install the dependencies and `encodec` in developer mode (changes to the files will be reflected immediately), along with the dependencies needed to run the unit tests.

```bash
pip install -e '.[dev]'
```
Test
You can run the unit tests with:

```bash
make tests
```
FAQ
Please check this section before opening an issue.
Out of memory errors with long files
We do not try to be smart about long files: we apply the model at once on the entire file, which can lead to large memory usage and result in the process being killed. At the moment we will not support this use case.
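If you do need to process a long file, one possible user-side workaround (not part of the package, and not bit-identical to encoding the whole file at once, since each chunk is encoded without the context of its neighbours) is to split the waveform before encoding:

```python
import torch

def encode_in_chunks(model, wav, chunk_seconds=60):
    """Hypothetical helper: encode a long [B, C, T] waveform chunk by chunk."""
    chunk_len = int(chunk_seconds * model.sample_rate)
    all_frames = []
    with torch.no_grad():
        for start in range(0, wav.shape[-1], chunk_len):
            chunk = wav[..., start:start + chunk_len]
            all_frames.extend(model.encode(chunk))
    return all_frames
```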
Bad interactions between DistributedDataParallel and the RVQ code
We do not use DDP; instead, we recommend using the routines in `encodec/distrib.py`, in particular `encodec.distrib.sync_buffer` and `encodec.distrib.sync_grad`.
Citation
If you use this code or results in your paper, please cite our work as:
```bibtex
@article{defossez2022highfi,
  title={High Fidelity Neural Audio Compression},
  author={Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi},
  journal={arXiv preprint arXiv:2210.13438},
  year={2022}
}
```
License
The code in this repository is released under the MIT license as found in the LICENSE file.