I quantized the Llama 7B-chat model with llama.cpp and got the file ggml-model-q4_0.gguf. But llama.go does not seem to support the GGUF format; it shows this error:
```
[ERROR] Invalid model file '../llama.cpp/models/7B/ggml-model-q4_0.gguf'! Wrong MAGIC in header
[ ERROR ] Failed to load model "../llama.cpp/models/7B/ggml-model-q4_0.gguf"
```
The llama.cpp project is under active development, and from time to time it introduces breaking changes (including to the GGUF format). Development on this project stopped around April 2023, so it probably isn't useful with today's models.
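
A quick way to confirm the mismatch yourself is to inspect the file's magic bytes: GGUF files begin with the four ASCII bytes `GGUF`, while the older GGML-era formats that pre-GGUF loaders expect use different magic values, which is what triggers the "Wrong MAGIC in header" error. Here is a minimal sketch in Go (`isGGUF` is a hypothetical helper, not part of llama.go):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// isGGUF reports whether the file at path starts with the GGUF magic.
// GGUF files begin with the ASCII bytes "GGUF"; older GGML-era files
// use different magic values, so a pre-GGUF loader rejects them.
// Hypothetical helper for diagnosis only, not part of llama.go.
func isGGUF(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	magic := make([]byte, 4)
	if _, err := io.ReadFull(f, magic); err != nil {
		return false, err
	}
	return string(magic) == "GGUF", nil
}

func main() {
	ok, err := isGGUF("../llama.cpp/models/7B/ggml-model-q4_0.gguf")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("GGUF file:", ok)
}
```

If this prints `GGUF file: true`, the file itself is fine and the loader simply predates the GGUF format.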