[GPT4ALL] Leverage already-downloaded gpt4all models from official GPT4ALL desktop client #84
I missed the fact that you already simply provide a link to the model you desire. Let me know if I'm missing something!
Thanks for the ticket @zudsniper! The way you describe it is indeed ideal: that way we don't have to download from the passed URL every time, reducing the time it takes to get to something useful. If we were doing pure Docker we would have used volumes and mounts, but Kurtosis doesn't support that. There is a function to upload files, but it's limited to 100 MB. I am chatting with a colleague to figure out why we have the 100 MB limit; if we can get past it (or support mounts in the future), the workflow could look much closer to what you describe.
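The caching idea under discussion here, namely downloading from the passed URL only once and reusing the local copy on later runs, could be sketched roughly as follows. This is a minimal illustration, not the package's actual API; the helper name and default cache directory are assumptions for the example.

```python
import os
import urllib.request

def ensure_model(url: str, cache_dir: str = os.path.expanduser("~/.cache/autogpt-models")) -> str:
    """Return a local path to the model, downloading only on a cache miss.

    Hypothetical sketch: `ensure_model` and the cache location are
    illustrative assumptions, not part of autogpt-package.
    """
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(local_path):  # cache miss: fetch exactly once
        urllib.request.urlretrieve(url, local_path)
    return local_path
```

On every run after the first, the function returns immediately with the cached path instead of re-downloading a multi-gigabyte model.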
I am tracking this :)
Hey @zudsniper - I'm Kevin, one of the Kurtosis cofounders, and I first off wanted to say thank you for taking the time to put as much detail as you did in this ticket and the other one - we're trying to gather all the product feedback we can right now, and your tickets are hugely helpful!

Second, re: the ticket - this idea of "persisting data" (which in this case would be a GPT model) is something we've been thinking deeply about, so I wanted to test a prototype: let's say that Kurtosis had the ability to mount files on your local computer's filesystem into the enclave (analogous to Docker's bind mounts). Would this provide the missing functionality?

(PS: checked out your SoundCloud and dug it - you've got a good voice, I liked the composition choices, and the vibe in general is interesting. Jamming to Modernity USA literally right now while writing this)
Hello @mieubrisse!

It is an interesting predicament: the idea of "persisting" for an instance is complicated, as you then have to make that instance stateful in some way, which is a surprisingly complicated problem when working with the real world, in my experience.

And thank you very much, man, it means a lot! C: I have a fair bit of music out on streaming under the moniker "phantom fanboy"; SoundCloud is kinda where I dump random things. I need to get back to music, but I've been making excuses. But I gotta. Anyway, though I really haven't had the free time to use this package the way I intend to, I have at least 3 distinct large-scale ideas I sincerely wish I could drop everything and work on. Looking forward to hopefully meeting you in our meeting Tuesday morning. Cheers C:
Hi Kurtosis team C:
Thank you for adding `gpt4all` model usage support! With it comes the problem of model size. I think an elegant solution / enhancement, which I believe is possible, would be the use of any models already downloaded by a user through the standard GPT4ALL client application. This, with the help of LocalAI of course, is doable as far as I can tell, and would save a lot of time, as well as integrate seamlessly with gpt4all.

This makes more sense in a macOS or Windows environment, where a desktop environment is much more likely to be involved, à la this example showing the GPT4ALL client application on Windows 10.
chat_RwyFxpSIWz.mp4
However, of course, especially with the use-cases that `autogpt` garners, it is naive to assume instances will have a desktop installed at all. `gpt4all` is guilty of this, as their README offers no CLI instructions, with even their `build_and_run` instructions being extremely visual, requiring specific dependencies and applications that are not CLI-friendly, etc.

Perhaps it is not as easily possible as I am thinking, and this is the reason that `gpt4all` haven't provided instructions for the process. But personally I think that, especially given that aforementioned `build_and_run` explanation, it would be worth implementing a system that allows users to download gpt4all models through `kurtosis` itself,¹ once per model, and then access / utilize them in `autogpt-package` as desired.

Once again, thank you guys for making this already extremely complicated field a lot more approachable. Having dived head-first into this stuff a while ago, I am very happy to see you guys working on a project like this, and I really appreciate the way in which you have responded to feedback. This project needs more eyes on it.
Jason
Footnotes

1. perhaps not directly through the `kurtosis` CLI, but through a subcommand of the `autogpt-package`, or something along those lines. ↩
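The reuse the ticket proposes would hinge on locating the desktop client's existing model directory. A hedged sketch follows: the per-OS paths below are commonly cited defaults for the GPT4ALL desktop client and may differ between client versions, so treat them as assumptions to verify, not documented guarantees.

```python
import platform
from pathlib import Path
from typing import List, Optional

def gpt4all_model_dir() -> Path:
    """Guess the GPT4ALL desktop client's default model directory.

    Assumption: these are the commonly cited defaults; check your
    client version's settings for the real location.
    """
    system = platform.system()
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "nomic.ai" / "GPT4All"
    if system == "Windows":
        return Path.home() / "AppData" / "Local" / "nomic.ai" / "GPT4All"
    # Linux and everything else
    return Path.home() / ".local" / "share" / "nomic.ai" / "GPT4All"

def downloaded_models(model_dir: Optional[Path] = None) -> List[Path]:
    """List model files (*.bin) the desktop client has already fetched."""
    model_dir = model_dir or gpt4all_model_dir()
    return sorted(model_dir.glob("*.bin")) if model_dir.is_dir() else []
```

With a listing like this, a tool could offer already-downloaded models to the user instead of fetching them again from a URL.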