
[REQ] Please add support for whisper.cpp: it will make transcription so much faster #43

Open
bilalazhar72 opened this issue Aug 21, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@bilalazhar72

Is your feature request related to a problem? Please describe.
I'm always frustrated when the model is so slow; not everyone is using the fastest GPU. I want to recommend this app to someone I know who is blind and owns a Windows laptop with only 2 GB of VRAM on their graphics card. They are a software engineer and willing to learn new things, but they have a hard time using their PC because they can't read the screen properly. With faster CPU transcription they might be able to transcribe videos and learn better through text readers.

Describe the solution you'd like
I just want you to keep the app the same, but add support for running the whisper.cpp model on the CPU, and give the user the choice to download a whisper.cpp model from this repo: https://github.com/bilalazhar72/whisper.cpp/tree/master/models

Describe alternatives you've considered
They show it being used on an iPhone 13 and doing a good job; the model seems to be very fast and accurate. There are currently no alternatives for using this on Windows. Since you already have everything set up, I would just like it to have two features: load a model and let the user speak, outputting the text (even long text); and load any model size and transcribe a given mp3 file. Implementing this via whisper.cpp will make sure the model can run on almost any hardware, and people will start using this app for that reason as well.

Additional context
Here is the video of them running it on an iPhone 13 with the model working flawlessly. I am not good at coding since I am a beginner, but it seems like everything is linked in one file and can be run easily as well.
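For what it's worth, the mp3-transcription flow described above could be sketched by shelling out to whisper.cpp's example CLI from Python. This is only a rough sketch, not this app's implementation: it assumes a locally built whisper.cpp `main` binary, a downloaded ggml model file, and `ffmpeg` on PATH; the helper names `build_whispercpp_cmd` and `transcribe` are hypothetical.

```python
import subprocess
from pathlib import Path


def build_whispercpp_cmd(binary, model_path, wav_path, language="en"):
    """Build a command line for whisper.cpp's bundled example CLI.

    The flags assumed here (-m model, -f input file, -l language) match
    the whisper.cpp `main` example; check your build's --help to confirm.
    """
    return [
        str(binary),
        "-m", str(model_path),  # ggml model, e.g. ggml-base.en.bin
        "-f", str(wav_path),    # 16 kHz mono WAV input
        "-l", language,
    ]


def transcribe(binary, model_path, audio_path):
    """Convert an mp3 to 16 kHz mono WAV with ffmpeg, then run whisper.cpp."""
    wav_path = Path(audio_path).with_suffix(".16k.wav")
    # whisper.cpp expects 16 kHz mono PCM WAV, so resample first.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(audio_path),
         "-ar", "16000", "-ac", "1", str(wav_path)],
        check=True,
    )
    result = subprocess.run(
        build_whispercpp_cmd(binary, model_path, wav_path),
        capture_output=True, text=True, check=True,
    )
    return result.stdout  # the CLI prints the transcript to stdout
```

Because everything runs through the C++ binary, the Python side stays a thin wrapper, which is roughly why this approach works even on low-VRAM or CPU-only machines.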

@bilalazhar72 bilalazhar72 added the enhancement New feature or request label Aug 21, 2023
@Dadangdut33
Owner

Dadangdut33 commented Sep 8, 2023

Hey thanks for the long and detailed post, and sorry for the very late reply.

I've seen whisper.cpp but never actually got to try it; I'll be sure to try adding it in the future. Right now I'm trying to integrate whisper_timestamped / stable-ts and fix bugs, while also improving the app experience in terms of UI and performance.

Once again, thanks for the suggestion 👍

@bilalazhar72
Author

> Once again, thanks for the suggestion 👍

Cool, thanks for reading that. I hope I get to see this come to life in the future 💯 Have a good day.
