Need to add support for Phi-4 around line 4665 in llama.cpp/llama.cpp:
} else if (hparams.n_layer == 40 && hparams.n_ctx_train == 16384) {
    // default value for Phi-4
    hparams.n_swa = 16384;
}
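The branch keys on Phi-4's shape (40 layers, 16384-token training context) and sets n_swa to that same 16384, i.e. the sliding window is made to span the entire training context.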
Also, it looks like hparams gets overwritten after this code runs; I verified with some debug output that the branch is actually taken. The Phi-3 cases don't matter because those models are configured correctly, but Phi-4 isn't, so I had to change its config.json and rebuild the GGUF:
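The actual config.json edit isn't quoted above. As a rough, hypothetical sketch: if the issue is that Phi-4's config.json lacks a usable sliding-window value, adding one that matches the 16384-token context would line up with the llama.cpp change (the sliding_window key is an assumption here, borrowed from the Phi-3 configs, as is the idea that this is the field that was edited):

{
  "sliding_window": 16384
}

All other config.json fields would stay as shipped, followed by re-running the HF-to-GGUF conversion to rebuild the model file.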
Contact Details
No response
What happened?
Llamafile crashes when running Microsoft's Phi-4 model (https://huggingface.co/microsoft/phi-4-gguf).
Version
v0.9.0
What operating system are you seeing the problem on?
Linux
Relevant log output
No response