How could I use a build of whisper.cpp with CUBLAS #19
Comments
Yes, something like #20. Right now I'm stuck on the same issue that's described in ggerganov/whisper.cpp#840, but I already have the fix described there, since the submodule is at a later commit than ggerganov/whisper.cpp#867 🫤 It sounds like something in … I don't have a device to test this on, so I'm stuck with a feedback cycle of 20 minutes or more through GitHub Actions. If you have a moment and a device to test with, it would be helpful if you could try to identify what's wrong with that command.
Sorry, I wasn't aware this library is Windows-only. The machine with the GPU that I intend to run Whisper on is Linux.
It's something I'd like to add; see #6. You could: …
I've tried to get the build to work on Linux, but then ran into the problem of how to cleanly install Ninja and vcpkg. On the other hand, I appreciate the cleanliness of your approach: a C++-like interface with the least added ceremony (I could simply read the original C++ source files and know how to use the .NET version).
I'm seeing better performance with CUBLAS and would like to use a version of whisper.cpp with CUDA support. Is it possible to somehow build a custom whisper.so with this project and provide custom build parameters?
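As a side note, one way to check whether a custom-built whisper.so actually picked up the cuBLAS/CUDA build parameters is to query it directly from C++, before wiring it into the .NET bindings. This is only a minimal sketch, not part of this project: the model path is a placeholder, while whisper_init_from_file and whisper_print_system_info are regular whisper.cpp API calls. A cuBLAS-enabled build should report BLAS support in the system info string (and typically prints CUDA device info to stderr when the model loads).

```cpp
// check_build.cpp -- sketch: verify at runtime that the whisper library you
// built (e.g. a custom whisper.so) was compiled with the features you expect.
#include <cstdio>
#include "whisper.h"

int main(int argc, char ** argv) {
    // "ggml-base.en.bin" is just a placeholder model path for illustration.
    const char * model = argc > 1 ? argv[1] : "ggml-base.en.bin";

    // Lists the compile-time features of the library that was linked/loaded;
    // a cuBLAS build should show BLAS enabled here.
    std::printf("system info: %s\n", whisper_print_system_info());

    // Loading a model with a CUDA-enabled build should also log GPU
    // initialization messages to stderr.
    struct whisper_context * ctx = whisper_init_from_file(model);
    if (ctx == nullptr) {
        std::fprintf(stderr, "failed to load model '%s'\n", model);
        return 1;
    }

    whisper_free(ctx);
    return 0;
}
```

Compiled and linked against the custom library on the Linux machine with the GPU, this gives a quick answer to whether the build parameters took effect, without the 20-minute CI feedback loop.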