memory requirements #227
I'm running the 30B alpaca and my memory usage is roughly 78% of my 32GB RAM while in use.
CPU or GPU RAM?
My PC memory/RAM. It also uses your CPU. As far as I know there are currently no configurable settings to use the GPU.
Will it work faster with a GPU?
Please see my edited comment above.
Very strange. Don't these models usually use GPUs?
This project is using llama.cpp/alpaca.cpp which "Runs on the CPU" https://github.com/antimatter15/alpaca.cpp#getting-started-30b |
GPUs are usually used to train them, not to run them.
Does it work as well as ChatGPT, or close?
I'd say 30B is closing in at about 80% of ChatGPT 3.5; 7B/13B maybe 60%+.
I'd be interested to know what prompts you've tried and what parameter values (temperature, etc.) you have. For me, even 30B feels like 10% of what I see with ChatGPT 3.5.
Maybe I'm an idiot, but I have to ask: are the memory requirements below for CPU or GPU RAM?
Runs on most modern computers. Unless your computer is very, very old, it should work.
According to ggerganov/llama.cpp#13, here are the memory requirements:
7B => ~4 GB
13B => ~8 GB
30B => ~16 GB
65B => ~32 GB
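Those figures work out to roughly 4 bits per weight, which matches the q4 quantized ggml models this project ships. As a rough sanity check (the 4-bits-per-weight assumption is mine, not stated in the issue), here's a quick back-of-the-envelope estimate; actual resident usage will be somewhat higher once the context/KV buffers are allocated, which is consistent with the ~25 GB (78% of 32 GB) reported above for 30B:

```python
def estimate_ram_gb(n_params_billion, bits_per_weight=4.0):
    """Rough RAM needed just to hold the quantized weights, in GB.

    n_params_billion * 1e9 params, bits_per_weight / 8 bytes each,
    divided back by 1e9 to get GB. Ignores runtime buffers.
    """
    return n_params_billion * bits_per_weight / 8

for size in (7, 13, 30, 65):
    print(f"{size}B => ~{estimate_ram_gb(size):.1f} GB")
```

This prints ~3.5, ~6.5, ~15.0, and ~32.5 GB for 7B/13B/30B/65B, in the same ballpark as the table above.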