
I keep getting Error while generation text: Request failed, status 404 #12

Open · Nestlium opened this issue Nov 7, 2023 · 8 comments

@Nestlium commented Nov 7, 2023

I just can't seem to get this to work. I've got Ollama running and can confirm it's reachable at http://localhost:11434, but I keep getting this error when I try to run the plugin. Is there any other configuration I need to do besides having Ollama running and the plugin installed?
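A quick terminal sanity check for the server side (assuming a default Ollama install on port 11434):

# Should reply "Ollama is running"
curl http://localhost:11434/

# Lists the models that have actually been pulled locally
ollama list

If both of those work, the 404 usually points at the plugin asking for a model that isn't pulled, or at a malformed URL.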

@8bitbuddhist

@Nestlium The plugin uses llama2 by default. What worked for me was pulling llama2, but there's an option in the plugin settings called New Command Model that should let you change the model used.
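A minimal sketch of that first workaround, assuming the Ollama CLI is on your PATH:

# Pull the plugin's default model so its /api/generate requests stop 404ing
ollama pull llama2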

@skoyramsPS

@8bitbuddhist Thanks.
This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.

@Nestlium (Author)

> @8bitbuddhist Thanks. This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.

I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?

@8bitbuddhist commented Nov 11, 2023

> I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?

In the plugin settings, look for the option "New Command Model" and enter the name of the LLM you're using. This is the same name you'd use when running ollama pull [model name] or ollama run [model name].

For example, to run Mistral:

[screenshot: plugin settings with New Command Model set to "mistral"]

@skoyramsPS

In addition to @8bitbuddhist's steps, the following troubleshooting helped me:

  1. Open the Obsidian developer console (on Ubuntu the shortcut is Ctrl+Shift+I).
  2. Go to the 'Sources' tab and look for plugin:Ollama.
  3. Look for line 225, or search for the text '/api/generate'.
  4. Add a breakpoint.
  5. You will now be able to check the exact URL, model, and prompt that would be used to make an API request to Ollama.
  6. Create a curl command similar to the example below (replace the values to match your use case):

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Tell me why the sky is blue",
  "system": "You are an AI assistant who helps answer queries."
}'

  7. Execute the curl command in a terminal window and check whether you get a response back.

Screenshot of where to set the breakpoint:
[screenshot: breakpoint set on the '/api/generate' line in the Sources tab]
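If the setup is healthy, the curl in step 6 should stream back newline-delimited JSON chunks shaped roughly like this (values are illustrative, not actual output):

{"model":"mistral","created_at":"...","response":"The","done":false}
{"model":"mistral","created_at":"...","response":" sky","done":false}
{"model":"mistral","created_at":"...","response":"","done":true}

If the same request 404s here too, the problem is the URL or the model name rather than the plugin.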

@lockmeister

It would be great if the plugin allowed the user to choose the model from a drop-down menu.
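Ollama already exposes the data a drop-down like that would need; a sketch of fetching it, assuming the default endpoint:

# Returns JSON describing every locally pulled model (name, size, modified date)
curl http://localhost:11434/api/tags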

@notV3NOM commented Dec 30, 2023

Facing the same error.

No network call shows up in the network panel, yet sending the request manually works, and the variables inside the request body have the correct values.


Edit: Fixed by removing the trailing / at the end of the Ollama URL, as suggested in
#8 (comment)
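That fix is consistent with the 404: assuming the plugin simply concatenates the configured base URL with the API path, a trailing slash produces a double slash in the request:

# Base URL "http://localhost:11434"  -> http://localhost:11434/api/generate  (OK)
# Base URL "http://localhost:11434/" -> http://localhost:11434//api/generate (404)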

@rawzone commented Apr 23, 2024

Manually overriding the llama2 in the plugin files to llama3 seems to work after a restart of Obsidian.
(The plugin folder is saved in the vault under .obsidian\plugins\ollama.)

I guess any update will overwrite this though, so it would be nice to have an option to change the model via the plugin settings.
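A sketch of that manual override from the vault root, assuming the compiled plugin bundle is main.js (the conventional file name for Obsidian plugins) and that "llama2" only appears there as the default model name:

# Find where the default model is hardcoded
grep -n "llama2" .obsidian/plugins/ollama/main.js

# Swap it for llama3, then restart Obsidian
sed -i 's/llama2/llama3/g' .obsidian/plugins/ollama/main.js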
