`make` runs without error but the ./model folder is empty #7
Comments
Same. I can't perform this step.
Download models from here: https://huggingface.co/nyanko7/LLaMA-7B
We have to organize these files according to the project convention.
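For reference, the layout the early llama.cpp README expected looked roughly like the sketch below. The `models/` path and the filenames are assumptions based on the original LLaMA-7B release files, not taken from this thread:

```shell
# Hypothetical layout sketch: create the expected directory structure,
# then move the downloaded files into place. Filenames assume the
# original LLaMA-7B release (consolidated.00.pth, params.json,
# tokenizer.model).
mkdir -p models/7B

# Expected result:
#   models/
#   ├── 7B/
#   │   ├── consolidated.00.pth
#   │   └── params.json
#   └── tokenizer.model
#
# e.g. (paths are illustrative):
# mv ~/Downloads/consolidated.00.pth models/7B/
# mv ~/Downloads/params.json         models/7B/
# mv ~/Downloads/tokenizer.model     models/
```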
Works now. Thanks.
SlyEcho pushed a commit to SlyEcho/llama.cpp that referenced this issue (May 31, 2023): Reuse format_generation_settings for logging.
cebtenzzre added a commit to cebtenzzre/llama.cpp that referenced this issue (Nov 7, 2023).
chsasank pushed a commit to chsasank/llama.cpp that referenced this issue (Dec 20, 2023): add our readme; add live demo video; update README. Co-authored-by: syx <[email protected]>
chsasank pushed a commit to chsasank/llama.cpp that referenced this issue (Dec 20, 2023): remove warning at gpu split; remove dead code; adaptive sparsity threshold reading from model file; convert models with sparse threshold.
This was referenced Apr 7, 2024. Closed.
Did I miss anything?