ggerganov / whisper.cpp

Port of OpenAI's Whisper model in C/C++

Top Related Projects

  • openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • ggerganov/llama.cpp: LLM inference in C/C++
  • facebookresearch/fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX
  • mozilla/DeepSpeech: an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers
  • alphacep/vosk-api: offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node

Quick Overview

Whisper.cpp is a high-performance C++ implementation of OpenAI's Whisper automatic speech recognition (ASR) model. It provides efficient CPU inference of Whisper, allowing for fast and accurate transcription of audio without requiring a GPU.

Pros

  • High performance on CPU, making it accessible for devices without GPUs
  • Low memory usage compared to the original Python implementation
  • Supports various Whisper model sizes (from tiny to large)
  • Cross-platform compatibility (Windows, macOS, Linux, iOS, Android)

Cons

  • Limited to inference only; no training capabilities
  • Fewer features compared to the original Python implementation
  • May require manual compilation on some platforms
  • Limited language support compared to the full Whisper model

Code Examples

  1. Basic transcription:
#include "whisper.h"

int main() {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    
    whisper_full(ctx, params, pcm, pcm_len, "output.txt");
    whisper_free(ctx);
    return 0;
}
  2. Streaming transcription:
#include "whisper.h"

int main() {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.print_realtime = true;
    params.print_progress = false;

    while (true) {
        // Capture a chunk of audio into 'pcm' (PCM_SIZE float samples at 16 kHz)
        whisper_full(ctx, params, pcm, PCM_SIZE);
    }

    whisper_free(ctx);
    return 0;
}
  3. Language detection:
#include <stdio.h>
#include "whisper.h"

int main() {
    // language detection requires a multilingual model (not a *.en one)
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.bin");
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.language = "auto";

    // 'pcm' holds 16 kHz mono float samples, 'pcm_len' is the number of samples
    whisper_full(ctx, params, pcm, pcm_len);

    const char * detected_language = whisper_lang_str(whisper_full_lang_id(ctx));
    printf("Detected language: %s\n", detected_language);

    whisper_free(ctx);
    return 0;
}

Getting Started

  1. Clone the repository:

    git clone https://github.com/ggerganov/whisper.cpp.git
    
  2. Build the project:

    cd whisper.cpp
    make
    
  3. Download a Whisper model (e.g., base.en):

    bash ./models/download-ggml-model.sh base.en
    
  4. Run the example:

    ./main -m models/ggml-base.en.bin -f samples/jfk.wav
    

Competitor Comparisons

openai/whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Developed and maintained by OpenAI, ensuring high-quality and up-to-date models
  • Supports a wide range of languages and accents
  • Offers more advanced features like speaker diarization and timestamp generation

Cons of Whisper

  • Requires Python and significant computational resources
  • Slower inference speed compared to whisper.cpp
  • Larger model size, making it less suitable for edge devices or mobile applications

Code Comparison

whisper.cpp:

#include "whisper.h"

int main() {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    whisper_full_default(ctx, params, pcmf32.data(), pcmf32.size());
    whisper_print_timings(ctx);
    whisper_free(ctx);
}

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Key Differences

  • whisper.cpp is implemented in C++, while Whisper is in Python
  • whisper.cpp focuses on efficiency and portability, suitable for embedded systems
  • Whisper offers more features and flexibility but requires more resources
  • whisper.cpp has faster inference times, especially on CPU-only systems
  • Whisper provides better out-of-the-box support for multiple languages and accents

ggerganov/llama.cpp

LLM inference in C/C++

Pros of llama.cpp

  • Focuses on large language models, enabling local execution of LLMs
  • Supports quantization techniques for efficient model compression
  • Offers a wider range of model architectures and variants

Cons of llama.cpp

  • More complex setup and usage compared to whisper.cpp
  • Requires more computational resources for running large models
  • Less mature project with potentially more frequent breaking changes

Code Comparison

whisper.cpp:

// Load model
struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");

// Process audio ('pcmf32' is a std::vector<float> of 16 kHz mono samples)
whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());

// Print result
const char * text = whisper_full_get_segment_text(ctx, 0);
printf("%s\n", text);

llama.cpp:

// Initialize model
llama_context * ctx = llama_init_from_file("ggml-model-q4_0.bin", params);

// Generate text
llama_eval(ctx, tokens.data(), tokens.size(), n_past, n_threads);

// Get output
float * logits = llama_get_logits(ctx);
int token = llama_sample_top_p_top_k(ctx, logits, top_k, top_p, temp, repeat_penalty);

Both repositories focus on efficient C++ implementations of AI models, but they serve different purposes. whisper.cpp is tailored for speech recognition, while llama.cpp is designed for running large language models locally. The code snippets illustrate the different APIs and usage patterns between the two projects.

facebookresearch/fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • Broader scope: Supports a wide range of sequence-to-sequence tasks beyond speech recognition
  • More extensive documentation and examples
  • Larger community and more frequent updates

Cons of fairseq

  • Higher resource requirements and complexity
  • Steeper learning curve for beginners
  • Less optimized for specific speech recognition tasks

Code Comparison

fairseq:

import torch
from fairseq.models.wav2vec import Wav2VecModel

model = Wav2VecModel.from_pretrained('path/to/model')
wav_input_16khz = torch.randn(1,10000)
z = model.feature_extractor(wav_input_16khz)
c = model.feature_aggregator(z)

whisper.cpp:

#include "whisper.h"

whisper_context * ctx = whisper_init_from_file("path/to/model.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
params.language = "en";
whisper_full(ctx, params, pcm, n_samples);

The code snippets demonstrate the different approaches and languages used in each project. fairseq uses Python and PyTorch, while whisper.cpp is implemented in C++, focusing on efficiency and low-level control.

huggingface/transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

Pros of transformers

  • Broader scope: Supports a wide range of NLP tasks and models beyond speech recognition
  • Extensive documentation and community support
  • Seamless integration with PyTorch and TensorFlow

Cons of transformers

  • Higher resource requirements and slower inference
  • More complex setup and usage for specific tasks like speech recognition
  • Larger codebase and dependencies

Code Comparison

whisper.cpp:

#include "whisper.h"

int main() {
    struct whisper_context * ctx = whisper_init_from_file("model.bin");
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    whisper_full(ctx, params, pcm, pcm_len);
    whisper_free(ctx);
}

transformers:

from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
input_features = processor(audio, return_tensors="pt").input_features
predicted_ids = model.generate(input_features)

Summary

While transformers offers a versatile toolkit for various NLP tasks with extensive documentation, whisper.cpp focuses specifically on efficient speech recognition with lower resource requirements. The code comparison highlights the simplicity of whisper.cpp for quick implementation, whereas transformers provides a more flexible but verbose approach.

mozilla/DeepSpeech

DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.

Pros of DeepSpeech

  • Supports multiple languages and can be trained on custom datasets
  • Provides a complete end-to-end solution, including acoustic model and language model
  • Offers pre-trained models for immediate use

Cons of DeepSpeech

  • Requires more computational resources and has a larger model size
  • May have slower inference speed, especially on CPU-only systems
  • Less actively maintained, with the last major update in 2020

Code Comparison

DeepSpeech (Python):

import deepspeech
model = deepspeech.Model('path/to/model.pbmm')
text = model.stt(audio)

Whisper.cpp (C++):

#include "whisper.h"
whisper_context * ctx = whisper_init_from_file("path/to/model.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, params, pcm, pcm_len);

Both repositories provide speech recognition capabilities, but they differ in their approach and implementation. Whisper.cpp is a lightweight, C++ port of OpenAI's Whisper model, focusing on efficiency and ease of use. It's actively maintained and optimized for various platforms, including mobile and edge devices.

DeepSpeech, on the other hand, is a more comprehensive solution that allows for custom training and supports multiple languages. However, it may require more resources and has seen less recent development compared to Whisper.cpp.

The choice between the two depends on specific project requirements, such as language support, computational resources, and the need for customization.

alphacep/vosk-api

Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node.

Pros of Vosk-api

  • Supports multiple languages and acoustic models out of the box
  • Designed for real-time speech recognition with streaming capabilities
  • Offers bindings for various programming languages (Python, Java, Node.js, etc.)

Cons of Vosk-api

  • Generally larger model sizes compared to Whisper.cpp
  • May require more computational resources for real-time processing
  • Less focus on transcription accuracy for challenging audio conditions

Code Comparison

Vosk-api (Python):

from vosk import Model, KaldiRecognizer
import pyaudio

model = Model("model")
rec = KaldiRecognizer(model, 16000)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000, input=True, frames_per_buffer=8000)
stream.start_stream()

while True:
    data = stream.read(4000)
    if rec.AcceptWaveform(data):
        print(rec.Result())

Whisper.cpp (C++):

#include "whisper.h"

whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

whisper_full(ctx, params, pcmf32.data(), pcmf32.size());

const int n_segments = whisper_full_n_segments(ctx);
for (int i = 0; i < n_segments; ++i) {
    const char * text = whisper_full_get_segment_text(ctx, i);
    printf("%s", text);
}

README

whisper.cpp

Stable: v1.6.2 / Roadmap | F.A.Q.

High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model.

Supported platforms include macOS, iOS, Android, Linux, and Windows.

The entire high-level implementation of the model is contained in whisper.h and whisper.cpp. The rest of the code is part of the ggml machine learning library.

Having such a lightweight implementation of the model makes it easy to integrate it in different platforms and applications. As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: whisper.objc

https://user-images.githubusercontent.com/1991296/197385372-962a6dea-bca1-4d50-bf96-1d8c27b98c81.mp4

You can also easily make your own offline voice assistant application: command

https://user-images.githubusercontent.com/1991296/204038393-2f846eae-c255-4099-a76d-5735c25c49da.mp4

On Apple Silicon, the inference runs fully on the GPU via Metal:

https://github.com/ggerganov/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225

Or you can even run it straight in the browser: talk.wasm

Implementation details

  • The core tensor operations are implemented in C (ggml.h / ggml.c)
  • The transformer model and the high-level C-style API are implemented in C++ (whisper.h / whisper.cpp)
  • Sample usage is demonstrated in main.cpp
  • Sample real-time audio transcription from the microphone is demonstrated in stream.cpp
  • Various other examples are available in the examples folder

The tensor operators are heavily optimized for Apple Silicon CPUs. Depending on the computation size, ARM NEON SIMD intrinsics or CBLAS Accelerate framework routines are used. The latter are especially effective for bigger sizes since the Accelerate framework utilizes the special-purpose AMX coprocessor available in modern Apple products.
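
For a rough idea of what this looks like, here is a minimal sketch of a NEON-accelerated f32 dot product. It is illustrative only and not the actual ggml code; it assumes an AArch64 target where the vfmaq_f32 / vaddvq_f32 intrinsics are available.

#include <arm_neon.h>

// Illustrative only: the kind of SIMD inner loop ggml uses for f32 dot products
float dot_f32(const float * a, const float * b, int n) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        // acc += a[i..i+3] * b[i..i+3], four lanes at a time
        acc = vfmaq_f32(acc, vld1q_f32(a + i), vld1q_f32(b + i));
    }
    float sum = vaddvq_f32(acc); // horizontal sum of the 4 lanes
    for (; i < n; ++i) {
        sum += a[i] * b[i];      // scalar tail
    }
    return sum;
}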

Quick start

First clone the repository:

git clone https://github.com/ggerganov/whisper.cpp.git

Then, download one of the Whisper models converted to ggml format. For example:

bash ./models/download-ggml-model.sh base.en

Now build the main example and transcribe an audio file like this:

# build the main example
make

# transcribe an audio file
./main -f samples/jfk.wav

For a quick demo, simply run make base.en:

$ make base.en

cc  -I.              -O3 -std=c11   -pthread -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
c++ -I. -I./examples -O3 -std=c++11 -pthread examples/main/main.cpp whisper.o ggml.o -o main  -framework Accelerate
./main -h

usage: ./main [options] file0.wav file1.wav ...

options:
  -h,        --help              [default] show this help message and exit
  -t N,      --threads N         [4      ] number of threads to use during computation
  -p N,      --processors N      [1      ] number of processors to use during computation
  -ot N,     --offset-t N        [0      ] time offset in milliseconds
  -on N,     --offset-n N        [0      ] segment index offset
  -d  N,     --duration N        [0      ] duration of audio to process in milliseconds
  -mc N,     --max-context N     [-1     ] maximum number of text context tokens to store
  -ml N,     --max-len N         [0      ] maximum segment length in characters
  -sow,      --split-on-word     [false  ] split on word rather than on token
  -bo N,     --best-of N         [5      ] number of best candidates to keep
  -bs N,     --beam-size N       [5      ] beam size for beam search
  -wt N,     --word-thold N      [0.01   ] word timestamp probability threshold
  -et N,     --entropy-thold N   [2.40   ] entropy threshold for decoder fail
  -lpt N,    --logprob-thold N   [-1.00  ] log probability threshold for decoder fail
  -debug,    --debug-mode        [false  ] enable debug mode (eg. dump log_mel)
  -tr,       --translate         [false  ] translate from source language to english
  -di,       --diarize           [false  ] stereo audio diarization
  -tdrz,     --tinydiarize       [false  ] enable tinydiarize (requires a tdrz model)
  -nf,       --no-fallback       [false  ] do not use temperature fallback while decoding
  -otxt,     --output-txt        [false  ] output result in a text file
  -ovtt,     --output-vtt        [false  ] output result in a vtt file
  -osrt,     --output-srt        [false  ] output result in a srt file
  -olrc,     --output-lrc        [false  ] output result in a lrc file
  -owts,     --output-words      [false  ] output script for generating karaoke video
  -fp,       --font-path         [/System/Library/Fonts/Supplemental/Courier New Bold.ttf] path to a monospace font for karaoke video
  -ocsv,     --output-csv        [false  ] output result in a CSV file
  -oj,       --output-json       [false  ] output result in a JSON file
  -ojf,      --output-json-full  [false  ] include more information in the JSON file
  -of FNAME, --output-file FNAME [       ] output file path (without file extension)
  -ps,       --print-special     [false  ] print special tokens
  -pc,       --print-colors      [false  ] print colors
  -pp,       --print-progress    [false  ] print progress
  -nt,       --no-timestamps     [false  ] do not print timestamps
  -l LANG,   --language LANG     [en     ] spoken language ('auto' for auto-detect)
  -dl,       --detect-language   [false  ] exit after automatically detecting language
             --prompt PROMPT     [       ] initial prompt
  -m FNAME,  --model FNAME       [models/ggml-base.en.bin] model path
  -f FNAME,  --file FNAME        [       ] input WAV file path
  -oved D,   --ov-e-device DNAME [CPU    ] the OpenVINO device used for encode inference
  -ls,       --log-score         [false  ] log best decoder scores of tokens
  -ng,       --no-gpu            [false  ] disable GPU


bash ./models/download-ggml-model.sh base.en
Downloading ggml model base.en ...
ggml-base.en.bin               100%[========================>] 141.11M  6.34MB/s    in 24s
Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
You can now use it like this:

  $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav


===============================================
Running base.en on all samples in ./samples ...
===============================================

----------------------------------------------
[+] Running base.en on samples/jfk.wav ... (run 'ffplay samples/jfk.wav' to listen)
----------------------------------------------

whisper_init_from_file: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: f16           = 1
whisper_model_load: type          = 2
whisper_model_load: mem required  =  215.00 MB (+    6.00 MB per decoder)
whisper_model_load: kv self size  =    5.25 MB
whisper_model_load: kv cross size =   17.58 MB
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     =  140.60 MB
whisper_model_load: model size    =  140.54 MB

system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...


[00:00:00.000 --> 00:00:11.000]   And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.


whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:     load time =   113.81 ms
whisper_print_timings:      mel time =    15.40 ms
whisper_print_timings:   sample time =    11.58 ms /    27 runs (    0.43 ms per run)
whisper_print_timings:   encode time =   266.60 ms /     1 runs (  266.60 ms per run)
whisper_print_timings:   decode time =    66.11 ms /    27 runs (    2.45 ms per run)
whisper_print_timings:    total time =   476.31 ms

The command downloads the base.en model converted to custom ggml format and runs the inference on all .wav samples in the folder samples.

For detailed usage instructions, run: ./main -h

Note that the main example currently runs only with 16-bit WAV files, so make sure to convert your input before running the tool. For example, you can use ffmpeg like this:

ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
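
If you are calling the library directly instead of using the main tool, note that whisper_full() expects 16 kHz mono samples as 32-bit floats in the range [-1, 1]. A minimal sketch of the conversion, assuming you have already decoded 16 kHz mono int16_t samples (the helper name is illustrative):

#include <stdint.h>
#include <stdlib.h>

// Convert signed 16-bit PCM samples to the float format expected by whisper_full()
float * pcm_s16_to_f32(const int16_t * pcm16, int n_samples) {
    float * pcmf32 = (float *) malloc(n_samples * sizeof(float));
    for (int i = 0; i < n_samples; ++i) {
        pcmf32[i] = pcm16[i] / 32768.0f; // scale to [-1, 1]
    }
    return pcmf32;
}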

More audio samples

If you want some extra audio samples to play with, simply run:

make samples

This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via ffmpeg.

You can download and run the other models as follows:

make tiny.en
make tiny
make base.en
make base
make small.en
make small
make medium.en
make medium
make large-v1
make large-v2
make large-v3

Memory usage

Model    Disk       Mem
tiny     75 MiB     ~273 MB
base     142 MiB    ~388 MB
small    466 MiB    ~852 MB
medium   1.5 GiB    ~2.1 GB
large    2.9 GiB    ~3.9 GB

Quantization

whisper.cpp supports integer quantization of the Whisper ggml models. Quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.

Here are the steps for creating and using a quantized model:

# quantize a model with Q5_0 method
make quantize
./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0

# run the examples as usual, specifying the quantized model file
./main -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
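
For context, Q5_0 stores weights in small blocks that share a single scale. The struct below is an illustrative sketch of such a block layout; treat the exact field order as an assumption and consult ggml's quantization sources for the authoritative definition.

#include <stdint.h>

// Illustrative sketch of a Q5_0 block: 32 weights sharing one fp16 scale,
// each weight stored as 5 bits (4 low bits plus 1 packed high bit).
// Conceptually, w[i] = (q[i] - 16) * d, with q[i] in [0, 31].
#define QK5_0 32
typedef struct {
    uint16_t d;              // fp16 scale
    uint8_t  qh[4];          // the high (5th) bit of each of the 32 quants
    uint8_t  qs[QK5_0 / 2];  // the low 4 bits, two quants per byte
} block_q5_0_sketch;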

Core ML support

On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in a significant speed-up - more than 3x faster compared with CPU-only execution. Here are the instructions for generating a Core ML model and using it with whisper.cpp:

  • Install Python dependencies needed for the creation of the Core ML model:

    pip install ane_transformers
    pip install openai-whisper
    pip install coremltools
    
    • To ensure coremltools operates correctly, please confirm that Xcode is installed and execute xcode-select --install to install the command-line tools.
    • Python 3.10 is recommended.
    • MacOS Sonoma (version 14) or newer is recommended, as older versions of MacOS might experience issues with transcription hallucination.
    • [OPTIONAL] It is recommended to utilize a Python version management system, such as Miniconda for this step:
      • To create an environment, use: conda create -n py310-whisper python=3.10 -y
      • To activate the environment, use: conda activate py310-whisper
  • Generate a Core ML model. For example, to generate a base.en model, use:

    ./models/generate-coreml-model.sh base.en
    

    This will generate the folder models/ggml-base.en-encoder.mlmodelc

  • Build whisper.cpp with Core ML support:

    # using Makefile
    make clean
    WHISPER_COREML=1 make -j
    
    # using CMake
    cmake -B build -DWHISPER_COREML=1
    cmake --build build -j --config Release
    
  • Run the examples as usual. For example:

    $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
    
    ...
    
    whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
    whisper_init_state: first run on a device may take a while ...
    whisper_init_state: Core ML model loaded
    
    system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |
    
    ...
    

    The first run on a device is slow, since the ANE service compiles the Core ML model to some device-specific format. Next runs are faster.

For more information about the Core ML implementation please refer to PR #566.

OpenVINO support

On platforms that support OpenVINO, the Encoder inference can be executed on OpenVINO-supported devices including x86 CPUs and Intel GPUs (integrated & discrete).

This can result in significant speedup in encoder performance. Here are the instructions for generating the OpenVINO model and using it with whisper.cpp:

  • First, set up a Python virtual environment and install the Python dependencies. Python 3.10 is recommended.

    Windows:

    cd models
    python -m venv openvino_conv_env
    openvino_conv_env\Scripts\activate
    python -m pip install --upgrade pip
    pip install -r requirements-openvino.txt
    

    Linux and macOS:

    cd models
    python3 -m venv openvino_conv_env
    source openvino_conv_env/bin/activate
    python -m pip install --upgrade pip
    pip install -r requirements-openvino.txt
    
  • Generate an OpenVINO encoder model. For example, to generate a base.en model, use:

    python convert-whisper-to-openvino.py --model base.en
    

    This will produce ggml-base.en-encoder-openvino.xml/.bin IR model files. It's recommended to relocate these to the same folder as ggml models, as that is the default location that the OpenVINO extension will search at runtime.

  • Build whisper.cpp with OpenVINO support:

    Download OpenVINO package from release page. The recommended version to use is 2023.0.0.

    After downloading & extracting package onto your development system, set up required environment by sourcing setupvars script. For example:

    Linux:

    source /path/to/l_openvino_toolkit_ubuntu22_2023.0.0.10926.b4452d56304_x86_64/setupvars.sh
    

    Windows (cmd):

    C:\Path\To\w_openvino_toolkit_windows_2023.0.0.10926.b4452d56304_x86_64\setupvars.bat
    

    And then build the project using cmake:

    cmake -B build -DWHISPER_OPENVINO=1
    cmake --build build -j --config Release
    
  • Run the examples as usual. For example:

    $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
    
    ...
    
    whisper_ctx_init_openvino_encoder: loading OpenVINO model from 'models/ggml-base.en-encoder-openvino.xml'
    whisper_ctx_init_openvino_encoder: first run on a device may take a while ...
    whisper_openvino_init: path_model = models/ggml-base.en-encoder-openvino.xml, device = GPU, cache_dir = models/ggml-base.en-encoder-openvino-cache
    whisper_ctx_init_openvino_encoder: OpenVINO model loaded
    
    system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 1 |
    
    ...
    

    The first run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get cached for the next run.

For more information about the OpenVINO implementation please refer to PR #1037.

NVIDIA GPU support

With NVIDIA cards the processing of the models is done efficiently on the GPU via cuBLAS and custom CUDA kernels. First, make sure you have installed CUDA: https://developer.nvidia.com/cuda-downloads

Now build whisper.cpp with CUDA support:

make clean
GGML_CUDA=1 make -j

BLAS CPU support via OpenBLAS

Encoder processing can be accelerated on the CPU via OpenBLAS. First, make sure you have installed OpenBLAS: https://www.openblas.net/

Now build whisper.cpp with OpenBLAS support:

make clean
GGML_OPENBLAS=1 make -j

BLAS CPU support via Intel MKL

Encoder processing can be accelerated on the CPU via the BLAS compatible interface of Intel's Math Kernel Library. First, make sure you have installed Intel's MKL runtime and development packages: https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html

Now build whisper.cpp with Intel MKL BLAS support:

source /opt/intel/oneapi/setvars.sh
mkdir build
cd build
cmake -DWHISPER_MKL=ON ..
WHISPER_MKL=1 make -j

Docker

Prerequisites

  • Docker must be installed and running on your system.
  • Create a folder to store big models and intermediate files (e.g. /whisper/models)

Images

We have two Docker images available for this project:

  1. ghcr.io/ggerganov/whisper.cpp:main: This image includes the main executable file as well as curl and ffmpeg. (platforms: linux/amd64, linux/arm64)
  2. ghcr.io/ggerganov/whisper.cpp:main-cuda: Same as main but compiled with CUDA support. (platforms: linux/amd64)

Usage

# download model and persist it in a local folder
docker run -it --rm \
  -v path/to/models:/models \
  whisper.cpp:main "./models/download-ggml-model.sh base /models"
# transcribe an audio file
docker run -it --rm \
  -v path/to/models:/models \
  -v path/to/audios:/audios \
  whisper.cpp:main "./main -m /models/ggml-base.bin -f /audios/jfk.wav"
# transcribe an audio file in samples folder
docker run -it --rm \
  -v path/to/models:/models \
  whisper.cpp:main "./main -m /models/ggml-base.bin -f ./samples/jfk.wav"

Installing with Conan

You can install pre-built binaries for whisper.cpp or build it from source using Conan. Use the following command:

conan install --requires="whisper-cpp/[*]" --build=missing

For detailed instructions on how to use Conan, please refer to the Conan documentation.

Limitations

  • Inference only

Another example

Here is another example of transcribing a 3:24 min speech in about half a minute on a MacBook M1 Pro, using medium.en model:

Expand to see the result
$ ./main -m models/ggml-medium.en.bin -f samples/gb1.wav -t 8

whisper_init_from_file: loading model from 'models/ggml-medium.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head  = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1024
whisper_model_load: n_text_head   = 16
whisper_model_load: n_text_layer  = 24
whisper_model_load: n_mels        = 80
whisper_model_load: f16           = 1
whisper_model_load: type          = 4
whisper_model_load: mem required  = 1720.00 MB (+   43.00 MB per decoder)
whisper_model_load: kv self size  =   42.00 MB
whisper_model_load: kv cross size =  140.62 MB
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     = 1462.35 MB
whisper_model_load: model size    = 1462.12 MB

system_info: n_threads = 8 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |

main: processing 'samples/gb1.wav' (3179750 samples, 198.7 sec), 8 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...


[00:00:00.000 --> 00:00:08.000]   My fellow Americans, this day has brought terrible news and great sadness to our country.
[00:00:08.000 --> 00:00:17.000]   At nine o'clock this morning, Mission Control in Houston lost contact with our Space Shuttle Columbia.
[00:00:17.000 --> 00:00:23.000]   A short time later, debris was seen falling from the skies above Texas.
[00:00:23.000 --> 00:00:29.000]   The Columbia's lost. There are no survivors.
[00:00:29.000 --> 00:00:32.000]   On board was a crew of seven.
[00:00:32.000 --> 00:00:39.000]   Colonel Rick Husband, Lieutenant Colonel Michael Anderson, Commander Laurel Clark,
[00:00:39.000 --> 00:00:48.000]   Captain David Brown, Commander William McCool, Dr. Kultna Shavla, and Ilan Ramon,
[00:00:48.000 --> 00:00:52.000]   a colonel in the Israeli Air Force.
[00:00:52.000 --> 00:00:58.000]   These men and women assumed great risk in the service to all humanity.
[00:00:58.000 --> 00:01:03.000]   In an age when space flight has come to seem almost routine,
[00:01:03.000 --> 00:01:07.000]   it is easy to overlook the dangers of travel by rocket
[00:01:07.000 --> 00:01:12.000]   and the difficulties of navigating the fierce outer atmosphere of the Earth.
[00:01:12.000 --> 00:01:18.000]   These astronauts knew the dangers, and they faced them willingly,
[00:01:18.000 --> 00:01:23.000]   knowing they had a high and noble purpose in life.
[00:01:23.000 --> 00:01:31.000]   Because of their courage and daring and idealism, we will miss them all the more.
[00:01:31.000 --> 00:01:36.000]   All Americans today are thinking as well of the families of these men and women
[00:01:36.000 --> 00:01:40.000]   who have been given this sudden shock and grief.
[00:01:40.000 --> 00:01:45.000]   You're not alone. Our entire nation grieves with you,
[00:01:45.000 --> 00:01:52.000]   and those you love will always have the respect and gratitude of this country.
[00:01:52.000 --> 00:01:56.000]   The cause in which they died will continue.
[00:01:56.000 --> 00:02:04.000]   Mankind is led into the darkness beyond our world by the inspiration of discovery
[00:02:04.000 --> 00:02:11.000]   and the longing to understand. Our journey into space will go on.
[00:02:11.000 --> 00:02:16.000]   In the skies today, we saw destruction and tragedy.
[00:02:16.000 --> 00:02:22.000]   Yet farther than we can see, there is comfort and hope.
[00:02:22.000 --> 00:02:29.000]   In the words of the prophet Isaiah, "Lift your eyes and look to the heavens
[00:02:29.000 --> 00:02:35.000]   who created all these. He who brings out the starry hosts one by one
[00:02:35.000 --> 00:02:39.000]   and calls them each by name."
[00:02:39.000 --> 00:02:46.000]   Because of His great power and mighty strength, not one of them is missing.
[00:02:46.000 --> 00:02:55.000]   The same Creator who names the stars also knows the names of the seven souls we mourn today.
[00:02:55.000 --> 00:03:01.000]   The crew of the shuttle Columbia did not return safely to earth,
[00:03:01.000 --> 00:03:05.000]   yet we can pray that all are safely home.
[00:03:05.000 --> 00:03:13.000]   May God bless the grieving families, and may God continue to bless America.
[00:03:13.000 --> 00:03:19.000]   [Silence]


whisper_print_timings:     fallbacks =   1 p /   0 h
whisper_print_timings:     load time =   569.03 ms
whisper_print_timings:      mel time =   146.85 ms
whisper_print_timings:   sample time =   238.66 ms /   553 runs (    0.43 ms per run)
whisper_print_timings:   encode time = 18665.10 ms /     9 runs ( 2073.90 ms per run)
whisper_print_timings:   decode time = 13090.93 ms /   549 runs (   23.85 ms per run)
whisper_print_timings:    total time = 32733.52 ms

Real-time audio input example

This is a naive example of performing real-time inference on audio from your microphone. The stream tool samples the audio every half a second and runs the transcription continuously. More info is available in issue #10.

make stream
./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000

https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4

Confidence color-coding

Adding the --print-colors argument will print the transcribed text using an experimental color coding strategy to highlight words with high or low confidence:

./main -m models/ggml-base.en.bin -f samples/gb0.wav --print-colors
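
The same per-token confidence is accessible from the C API. A minimal sketch, assuming the whisper_full_get_token_p / whisper_full_get_token_text accessors declared in whisper.h:

#include <stdio.h>
#include "whisper.h"

// Print every token together with its probability (what --print-colors visualizes)
void print_token_confidence(struct whisper_context * ctx) {
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        for (int j = 0; j < whisper_full_n_tokens(ctx, i); ++j) {
            const char * tok = whisper_full_get_token_text(ctx, i, j);
            const float  p   = whisper_full_get_token_p(ctx, i, j);
            printf("%s (p = %.2f)\n", tok, p);
        }
    }
}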

Controlling the length of the generated text segments (experimental)

For example, to limit the line length to a maximum of 16 characters, simply add -ml 16:

$ ./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 16

whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |

main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:00.850]   And so my
[00:00:00.850 --> 00:00:01.590]   fellow
[00:00:01.590 --> 00:00:04.140]   Americans, ask
[00:00:04.140 --> 00:00:05.660]   not what your
[00:00:05.660 --> 00:00:06.840]   country can do
[00:00:06.840 --> 00:00:08.430]   for you, ask
[00:00:08.430 --> 00:00:09.440]   what you can do
[00:00:09.440 --> 00:00:10.020]   for your
[00:00:10.020 --> 00:00:11.000]   country.

Word-level timestamp (experimental)

The --max-len argument can be used to obtain word-level timestamps. Simply use -ml 1:

$ ./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 1

whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |

main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:00.320]
[00:00:00.320 --> 00:00:00.370]   And
[00:00:00.370 --> 00:00:00.690]   so
[00:00:00.690 --> 00:00:00.850]   my
[00:00:00.850 --> 00:00:01.590]   fellow
[00:00:01.590 --> 00:00:02.850]   Americans
[00:00:02.850 --> 00:00:03.300]  ,
[00:00:03.300 --> 00:00:04.140]   ask
[00:00:04.140 --> 00:00:04.990]   not
[00:00:04.990 --> 00:00:05.410]   what
[00:00:05.410 --> 00:00:05.660]   your
[00:00:05.660 --> 00:00:06.260]   country
[00:00:06.260 --> 00:00:06.600]   can
[00:00:06.600 --> 00:00:06.840]   do
[00:00:06.840 --> 00:00:07.010]   for
[00:00:07.010 --> 00:00:08.170]   you
[00:00:08.170 --> 00:00:08.190]  ,
[00:00:08.190 --> 00:00:08.430]   ask
[00:00:08.430 --> 00:00:08.910]   what
[00:00:08.910 --> 00:00:09.040]   you
[00:00:09.040 --> 00:00:09.320]   can
[00:00:09.320 --> 00:00:09.440]   do
[00:00:09.440 --> 00:00:09.760]   for
[00:00:09.760 --> 00:00:10.020]   your
[00:00:10.020 --> 00:00:10.510]   country
[00:00:10.510 --> 00:00:11.000]  .
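
The segment timestamps shown above can also be read programmatically. A minimal sketch, assuming the whisper_full_get_segment_t0/t1 accessors in whisper.h return timestamps in units of 10 ms (which matches how the CLI output above is formatted):

#include <stdio.h>
#include <stdint.h>
#include "whisper.h"

// Print "[t0 --> t1] text" for every segment after a whisper_full() call
void print_segments(struct whisper_context * ctx) {
    const int n_segments = whisper_full_n_segments(ctx);
    for (int i = 0; i < n_segments; ++i) {
        const int64_t t0 = whisper_full_get_segment_t0(ctx, i); // start, 10 ms units
        const int64_t t1 = whisper_full_get_segment_t1(ctx, i); // end,   10 ms units
        printf("[%8.2f --> %8.2f]  %s\n",
               t0 / 100.0, t1 / 100.0, whisper_full_get_segment_text(ctx, i));
    }
}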

Speaker segmentation via tinydiarize (experimental)

More information about this approach is available here: https://github.com/ggerganov/whisper.cpp/pull/1058

Sample usage:

# download a tinydiarize compatible model
./models/download-ggml-model.sh small.en-tdrz

# run as usual, adding the "-tdrz" command-line argument
./main -f ./samples/a13.wav -m ./models/ggml-small.en-tdrz.bin -tdrz
...
main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...
...
[00:00:00.000 --> 00:00:03.800]   Okay Houston, we've had a problem here. [SPEAKER_TURN]
[00:00:03.800 --> 00:00:06.200]   This is Houston. Say again please. [SPEAKER_TURN]
[00:00:06.200 --> 00:00:08.260]   Uh Houston we've had a problem.
[00:00:08.260 --> 00:00:11.320]   We've had a main beam up on a volt. [SPEAKER_TURN]
[00:00:11.320 --> 00:00:13.820]   Roger main beam interval. [SPEAKER_TURN]
[00:00:13.820 --> 00:00:15.100]   Uh uh [SPEAKER_TURN]
[00:00:15.100 --> 00:00:18.020]   So okay stand, by thirteen we're looking at it. [SPEAKER_TURN]
[00:00:18.020 --> 00:00:25.740]   Okay uh right now uh Houston the uh voltage is uh is looking good um.
[00:00:27.620 --> 00:00:29.940]   And we had a a pretty large bank or so.

Karaoke-style movie generation (experimental)

The main example provides support for output of karaoke-style movies, where the currently pronounced word is highlighted. Use the -wts argument and run the generated bash script. This requires ffmpeg to be installed.

Here are a few "typical" examples:

./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -owts
source ./samples/jfk.wav.wts
ffplay ./samples/jfk.wav.mp4

https://user-images.githubusercontent.com/1991296/199337465-dbee4b5e-9aeb-48a3-b1c6-323ac4db5b2c.mp4


./main -m ./models/ggml-base.en.bin -f ./samples/mm0.wav -owts
source ./samples/mm0.wav.wts
ffplay ./samples/mm0.wav.mp4

https://user-images.githubusercontent.com/1991296/199337504-cc8fd233-0cb7-4920-95f9-4227de3570aa.mp4


./main -m ./models/ggml-base.en.bin -f ./samples/gb0.wav -owts
source ./samples/gb0.wav.wts
ffplay ./samples/gb0.wav.mp4

https://user-images.githubusercontent.com/1991296/199337538-b7b0c7a3-2753-4a88-a0cd-f28a317987ba.mp4


Video comparison of different models

Use the scripts/bench-wts.sh script to generate a video in the following format:

./scripts/bench-wts.sh samples/jfk.wav
ffplay ./samples/jfk.wav.all.mp4

https://user-images.githubusercontent.com/1991296/223206245-2d36d903-cf8e-4f09-8c3b-eb9f9c39d6fc.mp4


Benchmarks

In order to have an objective comparison of the performance of the inference across different system configurations, use the bench tool. The tool simply runs the Encoder part of the model and prints how much time it took to execute it. The results are summarized in the following Github issue:

Benchmark results

Additionally, a script to run whisper.cpp with different models and audio files is provided: bench.py.

You can run it with the following command; by default it will run against every standard model in the models folder.

python3 scripts/bench.py -f samples/jfk.wav -t 2,4,8 -p 1,2

It is written in Python with the intention of being easy to modify and extend for your benchmarking use case.

It outputs a CSV file with the results of the benchmarking.

ggml format

The original models are converted to a custom binary format. This makes it possible to pack everything needed into a single file:

  • model parameters
  • mel filters
  • vocabulary
  • weights

You can download the converted models using the models/download-ggml-model.sh script, or manually from the sources listed in models/README.md.

For more details, see the conversion script models/convert-pt-to-ggml.py or models/README.md.
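
As a small illustration, the sketch below peeks at the header of a downloaded model file. The exact layout is defined by the conversion script, so treat the field order here (a magic int32 followed by the hyperparameters that whisper_model_load prints above) as an assumption to verify against models/convert-pt-to-ggml.py.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    FILE * f = fopen("models/ggml-base.en.bin", "rb");
    if (!f) {
        return 1;
    }

    int32_t magic;
    int32_t hparams[11]; // assumed order: n_vocab, n_audio_ctx, ..., n_mels, f16

    if (fread(&magic, sizeof(magic), 1, f) != 1 ||
        fread(hparams, sizeof(int32_t), 11, f) != 11) {
        fclose(f);
        return 1;
    }

    printf("magic   = 0x%08x\n", magic);
    printf("n_vocab = %d\n", hparams[0]);
    printf("n_mels  = %d\n", hparams[9]);

    fclose(f);
    return 0;
}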

Bindings

Examples

There are various examples of using the library for different projects in the examples folder. Some of the examples are even ported to run in the browser using WebAssembly. Check them out!

Example               Web             Description
main                  whisper.wasm    Tool for translating and transcribing audio using Whisper
bench                 bench.wasm      Benchmark the performance of Whisper on your machine
stream                stream.wasm     Real-time transcription of raw microphone capture
command               command.wasm    Basic voice assistant example for receiving voice commands from the mic
wchess                wchess.wasm     Voice-controlled chess
talk                  talk.wasm       Talk with a GPT-2 bot
talk-llama                            Talk with a LLaMA bot
whisper.objc                          iOS mobile application using whisper.cpp
whisper.swiftui                       SwiftUI iOS / macOS application using whisper.cpp
whisper.android                       Android mobile application using whisper.cpp
whisper.nvim                          Speech-to-text plugin for Neovim
generate-karaoke.sh                   Helper script to easily generate a karaoke video of raw audio capture
livestream.sh                         Livestream audio transcription
yt-wsp.sh                             Download + transcribe and/or translate any VOD (original)
server                                HTTP transcription server with OAI-like API

Discussions

If you have any kind of feedback about this project, feel free to use the Discussions section and open a new topic. You can use the Show and tell category to share your own projects that use whisper.cpp. If you have a question, make sure to check the Frequently asked questions (#126) discussion.