Demo video: demo-eva.mp4
Boilerplate of whisper.cpp + ChatGPT + Electron. The goal is low latency.
Could be extended in many ways:
- add voice output with ElevenLabs/Bark
- get text selection as input
- get file selection as input
- use OpenAI functions to execute "tools", e.g. "convert this mov to wav please" (see the sketch after this list)
- save "memories" into a Prisma db
Setup:
- Expose the OPENAI_API_KEY environment variable in your shell
- Copy .env.sample to .env
- Create a `bin` folder in the root directory and add the required binaries (see the sketch after this list):
  - whisper -> the main program from whisper.cpp
  - sox
  - at least one ggml Whisper model (I use quantized French fine-tunes from here)
- Mac M1 binaries at your own risk
- Expect many bugs and hacks
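
For context on how those binaries might be used together, here is a minimal sketch of recording a clip with sox and transcribing it with the whisper.cpp binary from the Node/Electron main process. The paths under `bin/`, the model filename, and the exact flags are assumptions; adjust them to whatever you actually put in `bin`:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import path from "node:path";

const run = promisify(execFile);
const BIN = path.resolve(__dirname, "..", "bin"); // assumption: compiled output sits one level below the repo root

export async function recordAndTranscribe(seconds = 5): Promise<string> {
  const wav = path.join(BIN, "clip.wav");

  // Record from the default input device as 16 kHz, 16-bit mono -- the format whisper.cpp expects.
  await run(path.join(BIN, "sox"), [
    "-d", "-r", "16000", "-c", "1", "-b", "16", wav,
    "trim", "0", String(seconds),
  ]);

  // Transcribe with the whisper.cpp binary; -nt drops timestamps so stdout is plain text.
  const { stdout } = await run(path.join(BIN, "whisper"), [
    "-m", path.join(BIN, "ggml-small-q5_1.bin"), // hypothetical model filename
    "-f", wav,
    "-l", "fr",
    "-nt",
  ]);
  return stdout.trim();
}
```

A lower-latency setup would stream audio continuously and feed chunks to whisper, but the command-line round trip above is the simplest way to check that the binaries in `bin` are wired up correctly.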
Install dependencies and start the dev build:
pnpm i
pnpm dev