How to run my custom model #1992
-
Hi! I swear that I have read the docs, but I still can't get LocalAI to use my model. Let's suppose we are using mlabonne's fantastic NeuralHermes. I have copied the GGUF file into `models`, and it does show up in the model list:

```json
{"object":"list","data":[
  {"id":"gpt-4","object":"model"},
  {"id":"gpt-4-vision-preview","object":"model"},
  {"id":"stablediffusion","object":"model"},
  {"id":"text-embedding-ada-002","object":"model"},
  {"id":"tts-1","object":"model"},
  {"id":"whisper-1","object":"model"},
  {"id":"MODEL_CARD","object":"model"},
  {"id":"llava-v1.6-7b-mmproj-f16.gguf","object":"model"},
  {"id":"neuralhermes-2.5-mistral-7b.Q6_K.gguf","object":"model"},
  {"id":"voice-en-us-amy-low.tar.gz","object":"model"}
]}
```

However, when I call it I get an error.
I have tried to set up a YAML file, but the instructions are unclear. With the `gpt-4` model, which seems to be `Hermes-2-Pro-Mistral-7B.Q6_K.gguf` (a precursor of NeuralHermes), it works. What am I doing wrong?
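For reference, this is the shape of YAML I have been attempting, pieced together from examples around the repo; a minimal sketch, assuming the GGUF file sits in the `models` directory (the field values are my guesses, not confirmed by the docs):

```yaml
# models/neuralhermes.yaml (minimal sketch; values are guesses)
name: neuralhermes          # the model id to use in API calls
backend: llama              # llama.cpp-based backend for GGUF files
parameters:
  model: neuralhermes-2.5-mistral-7b.Q6_K.gguf  # filename inside the models dir
context_size: 4096
```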
-
@DavidGOrtega can you show the LocalAI logs with debug enabled? (just set `DEBUG=true`)
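If you are running the container, a sketch along these lines should enable it (the image tag and mount paths are illustrative, adjust them to your setup):

```shell
# Run LocalAI with debug logging enabled.
# Image tag and models path are examples, not the one true invocation.
docker run -p 8080:8080 \
  -e DEBUG=true \
  -v $PWD/models:/models \
  quay.io/go-skynet/local-ai:latest --models-path /models
```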
-
Silly me, I downloaded the wrong file: the HF raw link 🤦. The clue can be seen clearly in the logs.
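For anyone landing here with the same symptom: the download has to be the actual GGUF binary, not the HTML page behind the raw/blob link. On Hugging Face that means a `/resolve/` URL, roughly like this (the repo path is shown as an example):

```shell
# Fetch the real GGUF binary; note /resolve/, not /blob/ or /raw/.
# The repo path is an example of where such a file lives on Hugging Face.
wget -O models/neuralhermes-2.5-mistral-7b.Q6_K.gguf \
  https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-GGUF/resolve/main/neuralhermes-2.5-mistral-7b.Q6_K.gguf
```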