Releases: Purfview/whisper-standalone-win
Faster-Whisper-XXL r192.3.4
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
Includes all Standalone Faster-Whisper features plus some additional ones.
Includes all needed libs.
Read about it there: #231
Some new stuff in r192.3.4:
New feature: Write intermediate files to a temp folder. [ignored on dump]
New feature: Can take JSON files as input and generate subtitles from them according to the settings (see the usage sketch after this list).
New alternative VAD method: pyannote_v3
[the previous option with this name was renamed to "pyannote_onnx_v3"]
New arg: --nullify_non_speech
[for WIP experiments]
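A minimal command-line sketch of the new options above. Apart from --nullify_non_speech, the flag names (--vad_method, --output_format, --language) and the JSON-input invocation are assumptions based on the standalone's usual CLI, not confirmed by this release note; check --help for the exact names.

```
# Assumed invocations - verify flag names with: faster-whisper-xxl.exe --help

# Use the new VAD method (the old option of this name is now "pyannote_onnx_v3"):
faster-whisper-xxl.exe audio.wav --language=en --vad_method=pyannote_v3

# Re-generate subtitles from a previously produced JSON file, applying the current settings:
faster-whisper-xxl.exe transcript.json --output_format=srt

# WIP experimental flag added in this release:
faster-whisper-xxl.exe audio.wav --nullify_non_speech
```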
Link to the changelog.
Faster-Whisper r192.3
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
GPU execution requires cuBLAS and cuDNN 8.x libraries for CUDA 11.x.
Last included commit: #192
Some new stuff in r192.3:
Bugfix: 'one_word' was broken in r192.2.
Link to the changelog.
Whisper-OpenAI r136
cuBLAS and cuDNN
Place the libs in the same folder as the Faster-Whisper executable, or to:
Windows: the System32 dir.
Linux: a dir in the LD_LIBRARY_PATH env (see the example below).
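A hedged sketch of making the libs visible on Linux; the extraction path is a placeholder, and the Linux binary name and --model flag are assumptions based on the standalone's usual CLI.

```
# Linux: extract the libs anywhere and point LD_LIBRARY_PATH at that dir
export LD_LIBRARY_PATH="$HOME/cublas-cudnn-libs:$LD_LIBRARY_PATH"
./faster-whisper-xxl audio.wav --model=medium

# Windows: extract the DLLs next to faster-whisper-xxl.exe
# (or into C:\Windows\System32).
```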
.7z vs .zip - both archives contain the same files.
v2 is the last version with support for GPUs with a Kepler chip.