
Feature Request: An option to NOT use faster-whisper (for intel, amd, mali and other GPUs) #37

Closed
tomich opened this issue May 30, 2024 · 1 comment
tomich (Contributor) commented May 30, 2024

faster-whisper relies on CTranslate2, which only supports GPU acceleration on NVIDIA cards at the moment.

With normal whisper (the whisper library from OpenAI, or whisper imported from the transformers library) you can use GPU-accelerated transcription on any card that reports itself as a CUDA device. For example, I'm using pytorch-rocm and can run CUDA-accelerated PyTorch on my AMD 6900XT.

It would be great to have an option when calling the Transcriptor class to pass a variable indicating whether it should use faster-whisper or regular whisper. (It would be amazing if it could autodetect, but as these things change week to week, I believe a variable would be enough.)
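As a minimal sketch of what such a flag could look like (the `use_faster_whisper` parameter name and the `load_model` helper are hypothetical illustrations, not the library's actual API):

```python
# Hypothetical sketch of a backend-selection flag. The flag name
# `use_faster_whisper` and this helper are assumptions for illustration,
# not the library's real API.

def select_backend(use_faster_whisper: bool) -> str:
    """Return which whisper implementation to load.

    faster-whisper (built on CTranslate2) only accelerates on NVIDIA
    GPUs, so non-NVIDIA users would pass use_faster_whisper=False to
    fall back to the OpenAI whisper package, which runs on any torch
    device that reports itself as CUDA (including ROCm builds).
    """
    return "faster-whisper" if use_faster_whisper else "whisper"


def load_model(model_size: str = "base", use_faster_whisper: bool = True):
    # Imports are deferred so only the chosen backend needs to be installed.
    if select_backend(use_faster_whisper) == "faster-whisper":
        from faster_whisper import WhisperModel  # NVIDIA-only GPU path
        return WhisperModel(model_size, device="cuda")
    else:
        import whisper  # openai-whisper: any torch CUDA device
        return whisper.load_model(model_size, device="cuda")
```

An autodetect variant could inspect `torch.cuda.get_device_name()` for "NVIDIA", but as noted above, an explicit flag is simpler and less likely to break as the ecosystem changes.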

Thank you!

@NavodPeiris NavodPeiris self-assigned this Jun 3, 2024
@NavodPeiris NavodPeiris added the enhancement New feature or request label Jun 3, 2024
NavodPeiris (Owner) commented Jun 3, 2024

This was fixed in release 1.1.2.
