Support for Microsoft Phi-2 model #1438
This should actually just work with current LocalAI: https://huggingface.co/TheBloke/phi-2-GGUF
This works here with current master and this YAML config (note the GPU settings):

```yaml
name: phi-2
context_size: 2048
f16: true
threads: 11
gpu_layers: 90
mmap: true
parameters:
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 0.2
  top_k: 40
  top_p: 0.95
template:
  chat: &template |
    Instruct: {{.Input}}
    Output:
  completion: *template
```
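Once LocalAI is running with a config like the one above, the model is served through LocalAI's OpenAI-compatible API. A minimal sketch of a request body in Python, assuming the default `localhost:8080` endpoint and that the `name: phi-2` field from the config is what the server registers as the model name:

```python
import json

# Build a request body for LocalAI's OpenAI-compatible
# /v1/chat/completions endpoint (assumptions: server on
# localhost:8080, model registered under the config's `name: phi-2`).
payload = {
    "model": "phi-2",  # must match the `name` field in the YAML config
    "messages": [{"role": "user", "content": "How are you doing?"}],
    "temperature": 0.2,  # mirrors the default set in the config
}
body = json.dumps(payload)
print(body)

# Send it with e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$BODY"
```

The request shape is the standard OpenAI chat format, so any OpenAI client pointed at the LocalAI base URL should work the same way.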
Keeping it open just to have the example added here: https://github.com/mudler/LocalAI/tree/master/examples/configurations
https://huggingface.co/microsoft/phi-2