
m-bain / whisperX

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)


Top Related Projects

  • whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • whisper.cpp: Port of OpenAI's Whisper model in C/C++
  • faster-whisper: Faster Whisper transcription with CTranslate2
  • fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • silero-models: Pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple

Quick Overview

WhisperX is an advanced speech recognition and transcription tool that extends OpenAI's Whisper model. It offers improved timestamp accuracy, speaker diarization, and faster transcription speeds. WhisperX aims to provide a more comprehensive and efficient solution for audio transcription tasks.

Pros

  • Enhanced timestamp accuracy for word-level alignment
  • Integrated speaker diarization for multi-speaker audio
  • Faster transcription speeds compared to the original Whisper model
  • Support for multiple languages and accents

Cons

  • Requires more computational resources due to additional features
  • May have occasional accuracy issues with heavily accented speech
  • Limited documentation for advanced customization
  • Dependency on external libraries and models

Code Examples

  1. Basic transcription:
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device)
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)
print(result["segments"])
  2. Transcription with speaker diarization:
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device)
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device)
diarize_model = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(result["segments"])
  3. Transcription with automatic language detection:
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device)  # language is auto-detected when not specified
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)
print(result["language"], result["segments"])

Getting Started

To get started with WhisperX, follow these steps:

  1. Install WhisperX:
pip install git+https://github.com/m-bain/whisperx.git
  2. Install additional dependencies:
pip install pyannote.audio
  3. Use WhisperX in your Python script:
import whisperx

device = "cuda"  # use "cpu" with compute_type="int8" if no GPU is available
model = whisperx.load_model("large-v2", device)
audio = whisperx.load_audio("path/to/your/audio.mp3")
result = model.transcribe(audio)
print(result["segments"])

Note: For speaker diarization, you'll need to obtain an authentication token from Hugging Face and use it when initializing the diarization pipeline.
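
For example, the token is passed when constructing the diarization pipeline (a minimal sketch; replace YOUR_HF_TOKEN with a token of your own):

import whisperx

# the token grants access to the gated pyannote models that WhisperX uses for diarization
diarize_model = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device="cuda")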

Competitor Comparisons

Whisper: Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Developed by OpenAI, a leading AI research company
  • Extensive documentation and community support
  • Broader language support with 99 languages

Cons of Whisper

  • Slower processing speed for long audio files
  • Less precise timestamp alignment for transcriptions
  • Limited fine-tuning options for specific use cases

Code Comparison

WhisperX:

import whisperx

model = whisperx.load_model("large-v2", device="cuda")
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio)

Whisper:

import whisper

model = whisper.load_model("large")
result = model.transcribe("audio.wav")

Key Differences

  • WhisperX focuses on improved speed and accuracy for long-form content
  • WhisperX offers better word-level timestamp alignment
  • Whisper provides a more general-purpose solution for various audio transcription tasks

Use Cases

  • WhisperX: Ideal for long-form content, podcasts, and applications requiring precise word-level timestamps
  • Whisper: Better suited for general transcription tasks and multilingual applications

Community and Support

  • Whisper: Larger community, more third-party integrations, and extensive documentation
  • WhisperX: Growing community, focused on specific improvements over the original Whisper model

whisper.cpp: Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Lightweight and efficient C++ implementation, suitable for resource-constrained environments
  • Faster execution speed, especially on CPU-only systems
  • Easier integration into existing C/C++ projects

Cons of whisper.cpp

  • Limited features compared to WhisperX, focusing primarily on transcription
  • Less accurate speaker diarization and word-level timestamps
  • May require more manual configuration for optimal performance

Code Comparison

WhisperX (Python):

import whisperx

model = whisperx.load_model("large-v2", device="cuda")
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)
print(result["segments"])

whisper.cpp (C++):

#include "whisper.h"

whisper_context * ctx = whisper_init_from_file("ggml-large-v2.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, params, "audio.wav", nullptr, nullptr);

Summary

WhisperX offers more advanced features like improved diarization and word-level timestamps, while whisper.cpp provides a lightweight, efficient implementation suitable for C/C++ projects and resource-constrained environments. WhisperX may be preferred for complex audio processing tasks, while whisper.cpp excels in speed and simplicity for basic transcription needs.

faster-whisper: Faster Whisper transcription with CTranslate2

Pros of faster-whisper

  • Optimized for speed, offering faster transcription times
  • Supports streaming audio input for real-time transcription
  • Implements efficient CPU and GPU inference

Cons of faster-whisper

  • Limited to core Whisper functionality without additional features
  • May have slightly lower accuracy compared to WhisperX in some cases
  • Less focus on timestamp alignment and speaker diarization

Code Comparison

WhisperX:

import whisperx

model = whisperx.load_model("large-v2", device="cuda", compute_type="float16")
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio, batch_size=16)
segments = result["segments"]

faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.wav", beam_size=5)
for segment in segments:
    print(segment.text)

Key Differences

WhisperX focuses on enhancing Whisper with additional features like improved timestamp alignment and speaker diarization, while faster-whisper prioritizes speed optimization and efficient inference. WhisperX may offer better accuracy and more advanced features, but faster-whisper excels in transcription speed and real-time processing capabilities.

Both projects build upon the original Whisper model, but cater to different use cases. WhisperX is more suitable for applications requiring precise timing and speaker identification, while faster-whisper is ideal for scenarios where speed and efficiency are paramount.

fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • Comprehensive toolkit for sequence modeling tasks
  • Supports a wide range of architectures and tasks
  • Highly customizable and extensible

Cons of fairseq

  • Steeper learning curve due to its complexity
  • May be overkill for simple speech recognition tasks
  • Requires more setup and configuration

Code Comparison

fairseq:

import torch
from fairseq.models.wav2vec import Wav2VecModel

# load a pre-trained wav2vec checkpoint and extract features from a 16 kHz waveform tensor
cp = torch.load('path/to/wav2vec_large.pt')
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
features = model.feature_extractor(waveform)

WhisperX:

import whisperx

model = whisperx.load_model("base", device="cuda")
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)

Key Differences

  • fairseq is a general-purpose sequence modeling toolkit, while WhisperX focuses specifically on speech recognition and diarization
  • WhisperX provides a simpler API for quick transcription tasks
  • fairseq offers more flexibility for advanced users and researchers
  • WhisperX includes built-in speaker diarization, which is not a core feature of fairseq

Use Cases

  • fairseq: Research, custom model development, and complex sequence modeling tasks
  • WhisperX: Rapid speech transcription, speaker diarization, and alignment in production environments

Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple

Pros of silero-models

  • Supports multiple languages and tasks (speech recognition, text-to-speech, voice activity detection)
  • Lightweight models suitable for edge devices and mobile applications
  • Extensive documentation and examples for various programming languages

Cons of silero-models

  • May have lower accuracy compared to WhisperX for speech recognition tasks
  • Lacks advanced features like word-level timestamps and speaker diarization
  • Smaller community and fewer updates compared to WhisperX

Code Comparison

silero-models:

import torch

# load the English speech-to-text model via torch.hub; utils bundles helper functions
model, decoder, utils = torch.hub.load(repo_or_dir='snakers4/silero-models',
                                       model='silero_stt',
                                       language='en')

WhisperX:

import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device)
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio)

Both repositories offer speech recognition capabilities, but WhisperX focuses on extending OpenAI's Whisper model with additional features, while silero-models provides a broader range of speech-related tasks. WhisperX may be more suitable for high-accuracy transcription with advanced features, while silero-models is better for lightweight, multi-purpose speech processing applications.


README

WhisperX


(Figure: WhisperX pipeline architecture)

This repository provides fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.

  • ⚡️ Batched inference for 70x realtime transcription using whisper large-v2
  • 🪶 faster-whisper backend, requires <8GB gpu memory for large-v2 with beam_size=5
  • 🎯 Accurate word-level timestamps using wav2vec2 alignment
  • 👯‍♂️ Multispeaker ASR using speaker diarization from pyannote-audio (speaker ID labels)
  • 🗣️ VAD preprocessing, reduces hallucination & batching with no WER degradation

Whisper is an ASR model developed by OpenAI, trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds. OpenAI's whisper does not natively support batching.

Phoneme-Based ASR: A suite of models finetuned to recognise the smallest unit of speech distinguishing one word from another, e.g. the element p in "tap". A popular example model is wav2vec2.0.

Forced Alignment refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone level segmentation.

Voice Activity Detection (VAD) is the detection of the presence or absence of human speech.

Speaker Diarization is the process of partitioning an audio stream containing human speech into homogeneous segments according to the identity of each speaker.

New🚨

  • 1st place at Ego4d transcription challenge 🏆
  • WhisperX accepted at INTERSPEECH 2023
  • v3 transcript segment-per-sentence: using nltk sent_tokenize for better subtitling & better diarization
  • v3 released, 70x speed-up open-sourced. Using batched whisper with faster-whisper backend!
  • v2 released: code cleanup, imports whisper library. VAD filtering is now turned on by default, as in the paper.
  • Paper drop🎓👨‍🏫! Please see our arXiv preprint for benchmarking and details of WhisperX. We also introduce more efficient batch inference, resulting in large-v2 with 60-70x real-time speed.

Setup ⚙️

Tested for PyTorch 2.0, Python 3.10 (use other versions at your own risk!)

GPU execution requires the NVIDIA libraries cuBLAS 11.x and cuDNN 8.x to be installed on the system. Please refer to the CTranslate2 documentation.

1. Create Python3.10 environment

conda create --name whisperx python=3.10

conda activate whisperx

2. Install PyTorch, e.g. for Linux and Windows CUDA11.8:

conda install pytorch==2.0.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia

See other methods here.

3. Install this repo

pip install git+https://github.com/m-bain/whisperx.git

If already installed, update the package to the most recent commit:

pip install git+https://github.com/m-bain/whisperx.git --upgrade

If wishing to modify this package, clone and install in editable mode:

$ git clone https://github.com/m-bain/whisperX.git
$ cd whisperX
$ pip install -e .

You may also need to install ffmpeg, rust, etc. Follow the openAI instructions here: https://github.com/openai/whisper#setup.

Speaker Diarization

To enable Speaker Diarization, include your Hugging Face access token (read) after the --hf_token argument. You can generate the token here, and you must accept the user agreement for the following models: Segmentation and Speaker-Diarization-3.1. (If you choose to use Speaker-Diarization 2.x, follow the requirements here instead.)
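
For example, a typical invocation might look like this (an illustrative sketch; YOUR_HF_TOKEN stands in for your own token, and the speaker-count flags are optional):

whisperx examples/sample01.wav --model large-v2 --diarize --hf_token YOUR_HF_TOKEN --min_speakers 2 --max_speakers 2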

Note
As of Oct 11, 2023, there is a known issue regarding slow performance with pyannote/Speaker-Diarization-3.0 in whisperX. It is due to dependency conflicts between faster-whisper and pyannote-audio 3.0.0. Please see this issue for more details and potential workarounds.

Usage 💬 (command line)

English

Run whisperx on an example segment (using default params, whisper small). Add --highlight_words True to visualise word timings in the .srt file.

whisperx examples/sample01.wav

Result using WhisperX with forced alignment to wav2vec2.0 large:

https://user-images.githubusercontent.com/36994049/208253969-7e35fe2a-7541-434a-ae91-8e919540555d.mp4

Compare this to original whisper out of the box, where many transcriptions are out of sync:

https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-b404-bb941db73652.mov

For increased timestamp accuracy, at the cost of higher GPU memory, use bigger models (a bigger alignment model was not found to be that helpful; see paper), e.g.

whisperx examples/sample01.wav --model large-v2 --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --batch_size 4

To label the transcript with speaker IDs (set the number of speakers if known, e.g. --min_speakers 2 --max_speakers 2):

whisperx examples/sample01.wav --model large-v2 --diarize --highlight_words True

To run on CPU instead of GPU (and for running on Mac OS X):

whisperx examples/sample01.wav --compute_type int8

Other languages

The phoneme ASR alignment model is language-specific; for tested languages, these models are automatically picked from torchaudio pipelines or huggingface. Just pass in the --language code, and use the whisper --model large.

Currently, default models are provided for {en, fr, de, es, it, ja, zh, nl, uk, pt}. If the detected language is not in this list, you need to find a phoneme-based ASR model on the huggingface model hub and test it on your data (an illustrative invocation is sketched after the German example below).

E.g. German

whisperx --model large-v2 --language de examples/sample_de_01.wav

https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-97d4-0aebd07e0eb3.mov

See more examples in other languages here.
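
For a language without a default alignment model, point --align_model at a wav2vec2 checkpoint from the huggingface model hub. The model ID and audio path below are illustrative placeholders; verify that a suitable model actually exists for your target language before relying on it:

whisperx --model large-v2 --language el --align_model jonatasgrosman/wav2vec2-large-xlsr-53-greek examples/sample_el_01.wav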

Python usage 🐍

import whisperx
import gc 

device = "cuda" 
audio_file = "audio.mp3"
batch_size = 16 # reduce if low on GPU mem
compute_type = "float16" # change to "int8" if low on GPU mem (may reduce accuracy)

# 1. Transcribe with original whisper (batched)
model = whisperx.load_model("large-v2", device, compute_type=compute_type)

# save model to local path (optional)
# model_dir = "/path/"
# model = whisperx.load_model("large-v2", device, compute_type=compute_type, download_root=model_dir)

audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"]) # before alignment

# delete model if low on GPU resources
# import gc; gc.collect(); torch.cuda.empty_cache(); del model

# 2. Align whisper output
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device, return_char_alignments=False)

print(result["segments"]) # after alignment

# delete model if low on GPU resources
# import gc; gc.collect(); torch.cuda.empty_cache(); del model_a

# 3. Assign speaker labels
diarize_model = whisperx.DiarizationPipeline(use_auth_token=YOUR_HF_TOKEN, device=device)

# add min/max number of speakers if known
diarize_segments = diarize_model(audio)
# diarize_model(audio, min_speakers=min_speakers, max_speakers=max_speakers)

result = whisperx.assign_word_speakers(diarize_segments, result)
print(diarize_segments)
print(result["segments"]) # segments are now assigned speaker IDs
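
As a usage sketch (not part of the WhisperX API), the segments produced above, each carrying start, end, and text fields (plus a speaker label after diarization), can be written out as a simple .srt subtitle file with plain Python:

def to_srt_time(seconds: float) -> str:
    # format seconds as an SRT timestamp, e.g. 00:01:02,345
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

with open("audio.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n")
        # include the speaker label when diarization has been run
        f.write(f"[{seg.get('speaker', 'SPEAKER')}] {seg['text'].strip()}\n\n")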

Demos 🚀

Replicate demos: large-v3, large-v2, medium

If you don't have access to your own GPUs, use the links above to try out WhisperX.

Technical Details 👷‍♂️

For specific details on the batching and alignment, the effect of VAD, as well as the chosen alignment model, see the preprint paper.

To reduce GPU memory requirements, try any of the following (options 2 and 3 can affect quality); a combined invocation is shown after the list:

  1. Reduce the batch size, e.g. --batch_size 4
  2. Use a smaller ASR model, e.g. --model base
  3. Use a lighter compute type, e.g. --compute_type int8
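
For example, combining all three (an illustrative invocation; expect some accuracy trade-off):

whisperx examples/sample01.wav --model base --batch_size 4 --compute_type int8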

Transcription differences from openai's whisper:

  1. Transcription without timestamps. To enable single-pass batching, whisper inference is performed with --without_timestamps True, which ensures one forward pass per sample in the batch. However, this can cause discrepancies with the default whisper output.
  2. VAD-based segment transcription, unlike the buffered transcription of openai's. In the WhisperX paper we show this reduces WER and enables accurate batched inference.
  3. --condition_on_prev_text is set to False by default (reduces hallucination).

Limitations ⚠️

  • Transcript words which do not contain characters in the alignment model's dictionary, e.g. "2014." or "£13.60", cannot be aligned and are therefore not given a timing.
  • Overlapping speech is not handled particularly well by whisper nor whisperx
  • Diarization is far from perfect
  • A language-specific wav2vec2 alignment model is needed

Contribute 🧑‍🏫

If you are multilingual, a major way you can contribute to this project is to find phoneme models on huggingface (or train your own) and test them on speech for the target language. If the results look good, send a pull request with some examples showing its success.

Bug finding and pull requests are also highly appreciated to keep this project going, since it's already diverging from the original research scope.

TODO 🗓

  • Multilingual init

  • Automatic align model selection based on language detection

  • Python usage

  • Incorporating speaker diarization

  • Model flush, for low gpu mem resources

  • Faster-whisper backend

  • Add max-line etc. (see openai's whisper utils.py)

  • Sentence-level segments (nltk toolbox)

  • Improve alignment logic

  • Update examples with diarization and word highlighting

  • Subtitle .ass output <- bring this back (removed in v3)

  • Add benchmarking code (TEDLIUM for spd/WER & word segmentation)

  • Allow silero-vad as alternative VAD option

  • Improve diarization (word level). Harder than first thought...

Contact/Support 📇

Contact maxhbain@gmail.com for queries.

Buy Me A Coffee

Acknowledgements 🙏

This work, and my PhD, is supported by the VGG (Visual Geometry Group) and the University of Oxford.

Of course, this builds on openAI's whisper. It borrows important alignment code from the PyTorch tutorial on forced alignment and uses the wonderful pyannote VAD / Diarization: https://github.com/pyannote/pyannote-audio

Valuable VAD & Diarization models from pyannote-audio (https://github.com/pyannote/pyannote-audio)

Great backend from faster-whisper and CTranslate2

Those who have supported this work financially 🙏

Finally, thanks to the open-source contributors of this project for keeping it going and identifying bugs.

Citation

If you use this in your research, please cite the paper:
@article{bain2022whisperx,
  title={WhisperX: Time-Accurate Speech Transcription of Long-Form Audio},
  author={Bain, Max and Huh, Jaesung and Han, Tengda and Zisserman, Andrew},
  journal={INTERSPEECH 2023},
  year={2023}
}