Ollama Support #354
We're definitely interested in adding Ollama support to this project. Thanks for opening this issue.
I'm also looking forward to this feature! ✨
👀
Yeah, it would be great to support Ollama, LM Studio, llama.cpp, and more well-known open-source LLMs, like MiniCPM for vision.
How's it going now?
👀
Ollama already has an OpenAI-compatible API, so all you have to do is change the env variable and create model aliases. Add the Ollama endpoint to your env config.
Then create aliases for the models that screenshot-to-code uses.
Then it will call your local models when it makes LLM requests. I've not gotten very good results because all I can run is a 13b model, but perhaps the more powerful models will work well. Note: you need the 0.4.0 release of Ollama to run the llama-vision model https://medium.com/@tapanbabbar/how-to-run-llama-3-2-vision-on-ollama-a-game-changer-for-edge-ai-80cb0e8d8928.
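For concreteness, here is a minimal sketch of those two steps. It assumes the backend reads OPENAI_BASE_URL and OPENAI_API_KEY from backend/.env (verify the variable names against the repo's README), that Ollama is serving on its default port 11434, and that llama3.2-vision:11b-instruct-q8_0 is the local model standing in for the hosted ones; the alias names are taken from the PowerShell script later in this thread.
# backend/.env (assumed variable names -- check your copy of the repo):
#   OPENAI_API_KEY=ollama                       # any non-empty value; Ollama ignores it
#   OPENAI_BASE_URL=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
# Pull a local vision model, then alias it to the model names the app requests.
ollama pull llama3.2-vision:11b-instruct-q8_0
ollama cp llama3.2-vision:11b-instruct-q8_0 gpt-4o-2024-05-13
ollama cp llama3.2-vision:11b-instruct-q8_0 claude-3-5-sonnet-20240620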
@pdufour thanks for the tip! I'll include this in the README.
Got this error: |
I thought my Ollama was already 0.4.0... it's still 0.3.14...
I tried this model: minicpm-v:latest |
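If you hit the same version mismatch, a quick way to confirm which Ollama build is actually on your PATH (the llama-vision model mentioned above needs the 0.4.0 release):
# Prints something like "ollama version is 0.4.0"; upgrade if it reports an older build.
ollama --version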
Just to clarify for the script kiddies out there: the "for model in" part can't be used in the Windows Command Prompt. I've only figured it out for PowerShell. So, open powershell.exe and paste this in: $models = @(
How does ollama cp work if I'm running Ollama in Docker vs. Ollama installed on the host?
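Not an answer from the original thread, but as a sketch of how the Docker case could work: the ollama CLI inside the container manages the same model store the server uses, so the aliases can be created with docker exec. The container name "ollama" below is an assumption; substitute whatever name docker ps shows.
# Run "ollama cp" inside the container (container name "ollama" is assumed).
docker exec -it ollama ollama cp llama3.2-vision:11b-instruct-q8_0 gpt-4o-2024-05-13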
For those using PowerShell on Windows: you can save the script below with a .ps1 extension and run it as administrator to copy the model names. Before you use it, change $sourceModel to the full source model name shown by the ollama list command.
# Define the models in an array
$models = @(
"claude-3-5-sonnet-20240620",
"gpt-4o-2024-05-13",
"gpt-4-turbo-2024-04-09",
"gpt_4_vision",
"claude_3_sonnet"
)
# Define the source model
$sourceModel = "llama3.2-vision:11b-instruct-q8_0"
foreach ($model in $models) {
Write-Output "Copying $sourceModel to $model..."
& ollama cp $sourceModel $model
}
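As a quick sanity check (a suggestion, not part of the original comment), the aliases created by the script should then appear alongside the source model:
# Each copied name should be listed, all backed by the same local weights.
ollama list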
I love your project. I want to use it with local Ollama + LLaVA, and I've tried many ways, including asking ChatGPT.
I am on Windows 11; I tried Docker with no luck. I also changed the API address from the settings in the frontend,
and I tested that my local Ollama + LLaVA is running and answering with Postman.
I changed frontend\src\lib\models.ts
and also backend\llm.py, in the part labeled "Actual model versions that are passed to the LLMs and stored in our logs".
Console and backend errors are below.
If it can be used on a local server, it'll be awesome!
Thanks for your consideration.