
Support to connect to LM Studio server #30

Open
D-dezeeuw opened this issue Sep 6, 2024 · 1 comment

Comments


D-dezeeuw commented Sep 6, 2024

It connects automatically to Ollama on http://localhost:11434, which is great, but it would be perfect if I could also connect it to my local LM Studio server on http://localhost:1234.

The APIs of the two can be consumed in the same way and are interchangeable, as they both follow the OpenAI API structure. Both of them use llama.cpp under the hood, so that shouldn't be a problem either.
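For illustration (not part of the original report), here is a minimal sketch of what that interchangeability looks like, assuming the `openai` Python client; the model names are placeholders and depend on what is loaded locally. Only the base URL and model name differ between the two servers.

```python
# Minimal sketch (assumption: openai>=1.0 Python client; model names are placeholders).
from openai import OpenAI

OLLAMA_BASE = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
LMSTUDIO_BASE = "http://localhost:1234/v1"  # LM Studio's local server endpoint

def ask(base_url: str, model: str, prompt: str) -> str:
    # Local servers don't check the API key, but the client requires one to be set.
    client = OpenAI(base_url=base_url, api_key="not-needed")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# The call shape is identical against either backend:
# ask(OLLAMA_BASE, "llama3", "suggest a command to list open ports")
# ask(LMSTUDIO_BASE, "some-local-model", "suggest a command to list open ports")
```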

We can change the URL in the TLM configuration, but the problem seems to be that TLM expects a particular model and throws an error.

This way we would have full control over the models used and wouldn't be dependent on Ollama.

(I wanted to add a 'suggestion' label but wasn't able to, sorry.)


faeton commented Nov 29, 2024

Bump
