Replies: 3 comments 2 replies
-
You posted this in Bindings, which is mildly confusing, since in the language bindings you can choose a model (the bindings are not quite up to date; specifically, the Python SDK v2.7.0 has not yet incorporated recent changes made in the GUI, and I don't know what, if anything, will change in the next version). However, this looks like a question about the chat application in v3.0.0 or higher. At least for the time being, the name of the embedding model is hardcoded; see the logic in gpt4all/gpt4all-chat/embllm.cpp, Lines 33 to 34 (as of 6b8e0f7). As a hacky workaround, you could probably try renaming your quantised model to that name. It might work, or maybe it won't, because there could be other checks somewhere; I don't know.
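The renaming workaround could be sketched roughly like this. The directory and filenames below are assumptions taken from this thread (the actual hardcoded name should be checked against embllm.cpp in your version); the sketch uses a temp directory with placeholder files so it can be run safely:

```shell
# Sketch of the hacky workaround: rename the quantised GGUF to the
# filename the chat application expects (hardcoded in embllm.cpp).
# Demo runs in a temp dir with empty placeholder files; in practice,
# point RESOURCES_DIR at your actual gpt4all/resources folder.
RESOURCES_DIR="$(mktemp -d)"
EXPECTED_NAME="nomic-embed-text-v1.5.f16.gguf"   # name the app looks for
QUANTISED="nomic-embed-text-v1.5.Q5_K_M.gguf"    # your quantised model
touch "$RESOURCES_DIR/$EXPECTED_NAME" "$RESOURCES_DIR/$QUANTISED"

# Keep a backup of the original f16 model before replacing it.
mv "$RESOURCES_DIR/$EXPECTED_NAME" "$RESOURCES_DIR/$EXPECTED_NAME.bak"
# Give the quantised model the expected filename.
mv "$RESOURCES_DIR/$QUANTISED" "$RESOURCES_DIR/$EXPECTED_NAME"
ls "$RESOURCES_DIR"
```

Again, this may still fail if the application validates the file contents elsewhere.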
-
What I've experienced: using the Q5_K_M version is somewhat quicker, and the results are not much worse.
-
Hello, on the same topic: is it on the roadmap to be able to configure your own embedding model (mpnet, for example), the way we can for the chat models? Thanks
-
Hey,
how can I change the "nomic-embed-text-v1.5.f16.gguf" model in "gpt4all/resources" to the Q5_K_M quantised one?
Just removing the old one and pasting in the new one doesn't work.
Thank you in advance
Lenn