It connects automatically to Ollama on http://localhost:11434, which is great,
but it would be perfect if I could also connect it to my local LM Studio server on http://localhost:1234.
The APIs of the two can be consumed in the same way and are interchangeable, since both follow the OpenAI structures.
Both of them use llama.cpp under the hood, so that shouldn't be a problem either.
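To illustrate the interchangeability, here is a minimal sketch (not TLM code) using the official OpenAI Python client against both local servers; only the base URL differs, and the model names are placeholders for whatever is loaded locally.

```python
# Minimal sketch: the same OpenAI-style chat request works against either backend.
# The model names are assumptions; use whatever model you have loaded locally.
from openai import OpenAI

for base_url, model in [
    ("http://localhost:11434/v1", "llama3"),      # Ollama's OpenAI-compatible endpoint
    ("http://localhost:1234/v1", "local-model"),  # LM Studio's OpenAI-compatible endpoint
]:
    # Both servers ignore the API key, but the client requires a non-empty value.
    client = OpenAI(base_url=base_url, api_key="not-needed")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(base_url, "->", resp.choices[0].message.content)
```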
We can change the URL in the TLM configuration, but the problem seems to be
that TLM still expects a model and throws an error.
This way we would have full control over the models used, and I'm not dependent on Ollama.
(I wanted to add a 'suggestion' label but I wasn't able to, I'm sorry.)