I keep getting "Error while generation text: Request failed, status 404" #12
Comments
@Nestlium The model uses …

@8bitbuddhist Thanks. …
I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?
In the plugin settings, look for the option "New Command Model" and enter the name of the LLM model you're using. This is the same name you'd use when running a model from the Ollama command line. For example, to run Mistral:
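```
# the model name used here ("mistral") is the same name to enter in the
# plugin's "New Command Model" setting
ollama run mistral
```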
In addition to @8bitbuddhist's steps: …
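One likely prerequisite (the comment above is truncated, so this is an assumption) is making sure the model has actually been downloaded — Ollama answers with a 404 when asked to generate with a model it doesn't have locally:

```
# assumption: the 404 comes from Ollama not finding the requested model locally
ollama pull mistral   # download the model
ollama list           # confirm it appears in the local model list
```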
It would be great if the plugin allowed the user to choose the model from a drop-down menu.
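For what it's worth, Ollama already exposes an endpoint that lists the locally installed models, which is presumably what such a drop-down would query (a sketch — not taken from the plugin's code):

```
# Ollama's tags endpoint returns the locally available models as JSON,
# e.g. {"models":[{"name":"mistral:latest", ...}]}
curl http://localhost:11434/api/tags
```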
Facing the same error. No network call shows up in the network panel, but sending a request manually works. Edit: fixed by removing the …
Manually overriding llama2 to llama3 in the plugin files seems to work after restarting Obsidian. I guess any update will overwrite this, though, so it would be nice to have an option to set the model via the plugin settings.
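For anyone trying the same workaround, it amounts to something like the following — a sketch only; the plugin folder name is a guess, so check your own .obsidian/plugins directory first:

```
# assumption: the hardcoded model name lives in the plugin's bundled main.js,
# and the plugin folder is named "ollama" — verify under .obsidian/plugins/
sed -i 's/llama2/llama3/g' .obsidian/plugins/ollama/main.js
```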
I just can't seem to get this to work. I've got Ollama running and can see that it's reachable at http://localhost:11434, but I keep getting this error when I try to run the plugin. Is there any other configuration I need to do besides having Ollama running and the plugin installed?
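One way to narrow this down (a sketch — "llama2" below stands in for whatever model the plugin is configured to use) is to call the generate endpoint directly and see whether Ollama itself returns the 404:

```
# assumption: the plugin talks to Ollama's /api/generate endpoint;
# a 404 reply like {"error":"model 'llama2' not found"} means the model
# still needs to be pulled with `ollama pull`
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "hello"}'
```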