update to ollama 0.4.0 #12370
Comments
I wonder why it can't just be integrated with upstream Ollama, like NVIDIA and AMD do.
+1
Hi @Matthww , could you please let us know which exactly model do you want to run ? |
Hi @rnwang04, as @user7z mentioned, I'm indeed talking about the Llama 3.2-Vision collection that can be found on Ollama's model page: https://ollama.com/library/llama3.2-vision . Ollama was updated to support these new vision models in version 0.4.0, which is available in the releases tab of their GitHub repository.
@Matthww, but we're talking here about the one provided with ipex-llm GPU acceleration, which ships only Ollama 0.3.6. I think you know that ipex-llm is not integrated with the official Ollama, so the context here is the accelerated Ollama provided with ipex-llm in this GitHub repo.
We are planning a new rebase; if there is any progress, we will update here to let you know.
@wbste @rnwang04 I'm not aware of the technical details, but I think the proprietary way ipex-llm works wouldn't fit with Ollama. I mean, ROCm is open and CUDA is mostly open, so that's one high wall between ipex-llm and Ollama. And even within the Intel ecosystem, ipex-llm relies on the oneAPI Base Toolkit, and it takes the devs a lot of time to stay current with the latest release. I think for this project to continue it needs just two things:
I strongly recommend making it open source and merging the changes into the official Ollama project. This will attract more contributors and potential donations. |
Hi,
Is it possible that this project could be updated to support Ollama 0.4.0? I want to try the new Llama Vision models, but running them requires at least version 0.4.0.
Thanks!
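As a minimal sketch of the version constraint discussed above: before pulling the vision model, you can check which Ollama build is on your `PATH` and compare it against 0.4.0. This assumes a POSIX-ish shell with GNU `sort -V` for version comparison; `ollama run llama3.2-vision` is the model tag from the linked library page.

```shell
#!/bin/sh
# llama3.2-vision needs Ollama >= 0.4.0; the ipex-llm build ships 0.3.6.
required="0.4.0"

# Extract an x.y.z version from `ollama --version`; empty if ollama is absent.
installed="$(ollama --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)"

# sort -V sorts version strings numerically; if $required sorts first (or equal),
# the installed version is new enough.
if [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
    ollama run llama3.2-vision
else
    echo "Ollama '$installed' is too old or missing; need >= $required" >&2
fi
```

On the ipex-llm-provided 0.3.6 build this prints the "too old" message, which is exactly the situation the issue describes.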