Add llama2-uncensored LLM #1096
Conversation
Wow, excited to test this out!
Instead of this, it would be easier to integrate #1114 (respectfully), as that would give users the chance to host any model themselves or to connect to a server that does.
@SubGlitch1, the proposed PR updates the URLs for the LLM model. Note that responses from ChatGPT and local LLMs may differ slightly, potentially leading to parsing errors in the code. That said, modifying the model URL in this PR is feasible. While people might choose different machines for hosting and serving LLMs, it was a deliberate design choice on my side to require local LLM configuration from sources like environment variables or a config file. Although these aren't included in the initial proof of concept, we plan to integrate environment variables or config files once the first PoC is successfully implemented.
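The environment-variable approach described above could look roughly like this. This is a minimal sketch, not the project's actual code; the variable names (`LLM_BASE_URL`, `LLM_MODEL`) and the default URL are assumptions for illustration:

```python
import os

# Hypothetical default endpoint; the real project would define its own.
DEFAULT_LLM_URL = "http://localhost:8080/v1/chat/completions"

def load_llm_config() -> dict:
    """Read local-LLM settings from environment variables, falling back to defaults.

    LLM_BASE_URL and LLM_MODEL are assumed variable names, not the
    project's actual configuration keys.
    """
    return {
        "base_url": os.environ.get("LLM_BASE_URL", DEFAULT_LLM_URL),
        "model": os.environ.get("LLM_MODEL", "llama2-uncensored"),
    }
```

A config file could later override or replace these lookups without changing the call sites that consume the returned dict.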
You are absolutely right that responses from custom APIs might differ. That is why I proposed adding support for text-generation-webui, since that project's API follows the same schema as OpenAI's. Either way, the options don't have to rule each other out; an integrated LLM is also useful. Good luck!
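Because text-generation-webui exposes an OpenAI-compatible API, pointing an existing OpenAI-style client at it is mostly a matter of swapping the base URL. A hedged sketch of building such a request (the `/v1/chat/completions` path and payload shape follow the OpenAI chat-completions schema; the local port and model name are assumptions):

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust host/port to wherever the
# OpenAI-compatible server is actually listening.
BASE_URL = "http://localhost:5000"

def build_chat_request(prompt: str, model: str = "llama2-uncensored") -> urllib.request.Request:
    """Build an OpenAI-schema chat-completion request aimed at a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually send the request (requires a running local server):
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Since the request and response shapes match OpenAI's, existing parsing code should work unchanged against either backend.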
I'm closing this one in favor of #1116, because it's based on the master branch of @ErdemOzgen's fork.
This PR fixes #1035.