This repository provides fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.
- ⚡️ Batched inference for 70x realtime transcription using whisper large-v2
- 🪶 faster-whisper backend, requires <8GB GPU memory for large-v2 with beam_size=5
- 🎯 Accurate word-level timestamps using wav2vec2 alignment
- 👯‍♂️ Multispeaker ASR using speaker diarization from pyannote-audio (speaker ID labels)
- 🗣️ VAD preprocessing, reduces hallucination & allows batching with no WER degradation
Whisper is an ASR model developed by OpenAI, trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds. OpenAI's whisper does not natively support batching.
Phoneme-Based ASR: A suite of models finetuned to recognise the smallest unit of speech distinguishing one word from another, e.g. the element p in "tap". A popular example model is wav2vec2.0.
Forced Alignment refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation.
Voice Activity Detection (VAD) is the detection of the presence or absence of human speech.
Speaker Diarization is the process of partitioning an audio stream containing human speech into homogeneous segments according to the identity of each speaker.
- 1st place at Ego4d transcription challenge 🏆
- WhisperX accepted at INTERSPEECH 2023
- v3 transcript segment-per-sentence: using nltk sent_tokenize for better subtitling & better diarization
- v3 released, 70x speed-up open-sourced. Using batched whisper with faster-whisper backend!
- v2 released, code cleanup, imports whisper library. VAD filtering is now turned on by default, as in the paper.
- Paper drop 🎓👨‍🏫! Please see our arXiv preprint for benchmarking and details of WhisperX. We also introduce more efficient batch inference, resulting in large-v2 with 60-70x real-time speed.
GPU execution requires the NVIDIA libraries cuBLAS 11.x and cuDNN 8.x to be installed on the system. Please refer to the CTranslate2 documentation.
conda create --name whisperx python=3.10
conda activate whisperx
conda install pytorch==2.0.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
See other methods here.
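Before installing whisperx, a quick sanity check (a minimal sketch, not part of the official setup) can confirm that the CUDA-enabled PyTorch build from the conda command above is visible:

```python
# Minimal check that the conda-installed PyTorch build can see a GPU.
import torch

print(torch.__version__)          # expect 2.0.0 from the command above
print(torch.cuda.is_available())  # expect True on a working CUDA 11.8 setup
```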
pip install git+https://github.com/m-bain/whisperx.git
If already installed, update the package to the most recent commit:
pip install git+https://github.com/m-bain/whisperx.git --upgrade
If wishing to modify this package, clone and install in editable mode:
$ git clone https://github.com/m-bain/whisperX.git
$ cd whisperX
$ pip install -e .
You may also need to install ffmpeg, rust, etc. Follow the OpenAI instructions here: https://github.com/openai/whisper#setup.
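For example, ffmpeg is typically available from the system package manager (see the OpenAI setup link above for other platforms):

```bash
# Ubuntu / Debian
sudo apt update && sudo apt install ffmpeg

# macOS with Homebrew
brew install ffmpeg
```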
To enable speaker diarization, include your Hugging Face access token (with read permission), which you can generate here, after the --hf_token argument, and accept the user agreement for the following models: Segmentation and Speaker-Diarization-3.1 (if you choose to use Speaker-Diarization 2.x, follow the requirements here instead).
Note
As of Oct 11, 2023, there is a known issue regarding slow performance with pyannote/Speaker-Diarization-3.0 in whisperX. It is due to dependency conflicts between faster-whisper and pyannote-audio 3.0.0. Please see this issue for more details and potential workarounds.
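For example, a diarized transcription run from the command line could look like this (the token value is a placeholder for your own Hugging Face token):

whisperx examples/sample01.wav --model large-v2 --diarize --hf_token YOUR_HF_TOKEN --min_speakers 2 --max_speakers 2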
Run whisper on an example segment (using default params, whisper small). Add --highlight_words True to visualise word timings in the .srt file.
whisperx examples/sample01.wav
Result using WhisperX with forced alignment to wav2vec2.0 large:
sample01.mp4
Compare this to original whisper out of the box, where many transcriptions are out of sync:
sample_whisper_og.mov
For increased timestamp accuracy, at the cost of higher GPU memory, use bigger models (a bigger alignment model was not found to be that helpful, see the paper), e.g.
whisperx examples/sample01.wav --model large-v2 --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --batch_size 4
To label the transcript with speaker IDs (set the number of speakers if known, e.g. --min_speakers 2 --max_speakers 2):
whisperx examples/sample01.wav --model large-v2 --diarize --highlight_words True
To run on CPU instead of GPU (and for running on Mac OS X):
whisperx examples/sample01.wav --compute_type int8
The phoneme ASR alignment model is language-specific; for tested languages these models are automatically picked from torchaudio pipelines or huggingface. Just pass in the --language code, and use the whisper --model large.

Currently default models are provided for {en, fr, de, es, it, ja, zh, nl, uk, pt}. If the detected language is not in this list, you need to find a phoneme-based ASR model from the huggingface model hub and test it on your data.
whisperx --model large-v2 --language de examples/sample_de_01.wav
sample_de_01_vis.mov
See more examples in other languages here.
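For a language outside the default list, a community wav2vec2 checkpoint from the huggingface hub can be passed via --align_model. A hedged example: the model ID and audio file below are illustrative rather than official defaults, so verify the alignment quality on your own data.

whisperx examples/sample_pl_01.wav --model large-v2 --language pl --align_model jonatasgrosman/wav2vec2-large-xlsr-53-polish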
import whisperx
import gc
device = "cuda"
audio_file = "audio.mp3"
batch_size = 16 # reduce if low on GPU mem
compute_type = "float16" # change to "int8" if low on GPU mem (may reduce accuracy)
# 1. Transcribe with original whisper (batched)
model = whisperx.load_model("large-v2", device, compute_type=compute_type)
# save model to local path (optional)
# model_dir = "/path/"
# model = whisperx.load_model("large-v2", device, compute_type=compute_type, download_root=model_dir)
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"]) # before alignment
# delete model if low on GPU resources
# import torch; gc.collect(); torch.cuda.empty_cache(); del model
# 2. Align whisper output
model_a, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], model_a, metadata, audio, device, return_char_alignments=False)
print(result["segments"]) # after alignment
# delete model if low on GPU resources
# import torch; gc.collect(); torch.cuda.empty_cache(); del model_a
# 3. Assign speaker labels
diarize_model = whisperx.DiarizationPipeline(use_auth_token=YOUR_HF_TOKEN, device=device)
# add min/max number of speakers if known
diarize_segments = diarize_model(audio)
# diarize_model(audio, min_speakers=min_speakers, max_speakers=max_speakers)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(diarize_segments)
print(result["segments"]) # segments are now assigned speaker IDs
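As a small follow-up sketch (not part of the whisperx API; it only assumes the start/end/text keys printed above plus the speaker key added by assign_word_speakers), the labelled segments can be saved for downstream use:

```python
import json

# Persist the speaker-labelled segments (default=float handles any numpy scalars).
with open("transcript_segments.json", "w", encoding="utf-8") as f:
    json.dump(result["segments"], f, ensure_ascii=False, indent=2, default=float)

# Print a simple "start-end speaker: text" view of each segment.
for seg in result["segments"]:
    speaker = seg.get("speaker", "UNKNOWN")
    print(f'{seg["start"]:.2f}-{seg["end"]:.2f} {speaker}: {seg["text"].strip()}')
```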
If you don't have access to your own GPUs, use the links above to try out WhisperX.
For specific details on the batching and alignment, the effect of VAD, as well as the chosen alignment model, see the preprint paper.
To reduce GPU memory requirements, try any of the following (2. and 3. can affect quality); a combined example is shown after the list:
1. reduce batch size, e.g. --batch_size 4
2. use a smaller ASR model, e.g. --model base
3. use a lighter compute type, e.g. --compute_type int8
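For example, combining all three options (an illustrative invocation using the flags documented above):

whisperx examples/sample01.wav --model base --batch_size 4 --compute_type int8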
Transcription differences from openai's whisper:
- Transcription without timestamps. To enable single-pass batching, whisper inference is performed with --without_timestamps True; this ensures one forward pass per sample in the batch. However, this can cause discrepancies with the default whisper output.
- VAD-based segment transcription, unlike the buffered transcription of openai's. In the WhisperX paper we show this reduces WER, and enables accurate batched inference.
- --condition_on_prev_text is set to False by default (reduces hallucination).
- Transcript words which do not contain characters in the alignment model's dictionary, e.g. "2014." or "£13.60", cannot be aligned and therefore are not given a timing.
- Overlapping speech is not handled particularly well by whisper nor whisperx
- Diarization is far from perfect
- A language-specific wav2vec2 model is needed
If you are multilingual, a major way you can contribute to this project is to find phoneme models on huggingface (or train your own) and test them on speech for the target language. If the results look good, send a pull request and some examples showing its success.
Bug finding and pull requests are also highly appreciated to keep this project going, since it's already diverging from the original research scope.
- Multilingual init
- Automatic align model selection based on language detection
- Python usage
- Incorporating speaker diarization
- Model flush, for low GPU mem resources
- Faster-whisper backend
- Add max-line etc. (see openai's whisper utils.py)
- Sentence-level segments (nltk toolbox)
- Improve alignment logic
- Update examples with diarization and word highlighting
- Subtitle .ass output <- bring this back (removed in v3)
- Add benchmarking code (TEDLIUM for spd/WER & word segmentation)
- Allow silero-vad as alternative VAD option
- Improve diarization (word level). Harder than first thought...
Contact [email protected] for queries.
This work, and my PhD, is supported by the VGG (Visual Geometry Group) and the University of Oxford.
Of course, this builds on OpenAI's whisper. It borrows important alignment code from the PyTorch tutorial on forced alignment, and uses the wonderful pyannote VAD / diarization: https://github.com/pyannote/pyannote-audio
Valuable VAD & diarization models from [pyannote-audio](https://github.com/pyannote/pyannote-audio)
Great backend from faster-whisper and CTranslate2
Those who have supported this work financially 🙏
Finally, thanks to the open-source contributors of this project, keeping it going and identifying bugs.
If you use this in your research, please cite the paper:

@article{bain2022whisperx,
title={WhisperX: Time-Accurate Speech Transcription of Long-Form Audio},
author={Bain, Max and Huh, Jaesung and Han, Tengda and Zisserman, Andrew},
journal={INTERSPEECH 2023},
year={2023}
}