
Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader #4701

Merged

oobabooga merged 3 commits into dev from bump-llamacpp-python on Nov 21, 2023
Conversation

@oobabooga
Owner

No description provided.
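For context, a minimal sketch of what the title describes: forwarding the two new sampling parameters to llama-cpp-python's `create_completion()`. This is not the loader's actual code; the model path and sampling values are placeholders, and the kwargs assume llama-cpp-python >= 0.2.19.

```python
# Illustrative sketch only, not the webui loader's actual code.
# Forwards the two new sampling parameters to create_completion();
# model path and sampling values below are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf")

completion_chunks = llm.create_completion(
    "Once upon a time",
    max_tokens=64,
    temperature=0.7,
    top_p=0.9,
    min_p=0.05,     # drop tokens below 5% of the top token's probability
    typical_p=1.0,  # locally typical sampling; 1.0 disables it
    stream=True,
)
for chunk in completion_chunks:
    print(chunk["choices"][0]["text"], end="")
```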

@mclassen

I'm getting the following error when trying to use it with a GGUF model:

Traceback (most recent call last):
  File "/workspace/text-generation-webui/modules/callbacks.py", line 57, in gentask
    ret = self.mfunc(callback=_callback, *args, **self.kwargs)
  File "/workspace/text-generation-webui/modules/llamacpp_model.py", line 141, in generate
    completion_chunks = self.model.create_completion(
TypeError: Llama.create_completion() got an unexpected keyword argument 'min_p'
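This TypeError indicates the installed llama-cpp-python wheel predates min_p support, e.g. a cached build left over from before the bump. One way to check, as a quick sketch (assuming `llama_cpp.__version__` is exposed, as it is in recent releases):

```python
# Diagnostic sketch: the TypeError above means the installed
# llama-cpp-python predates min_p support, e.g. an old cached wheel.
import inspect
import llama_cpp

print(llama_cpp.__version__)  # should be 0.2.19 or newer after this PR

# Confirm create_completion actually accepts the new kwargs:
params = inspect.signature(llama_cpp.Llama.create_completion).parameters
print("min_p" in params, "typical_p" in params)
```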

@vmajor

vmajor commented Nov 29, 2023

Confirming. Still broken on the latest pull with the latest requirements.txt.

@mjameson

mjameson commented Dec 13, 2023

Cannot wait until we bump this again and get this in the mix: ggml-org/llama.cpp#4406

Excited to play around with TheBloke/Mixtral-8x7B-v0.1-GGUF in text-generation-webui.
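(Once a llama-cpp-python release picks up that change, loading a Mixtral GGUF should look like any other model. A hypothetical sketch; the filename is a placeholder for one of the quants in TheBloke/Mixtral-8x7B-v0.1-GGUF:)

```python
# Hypothetical sketch: after llama-cpp-python ships ggml-org/llama.cpp#4406,
# a Mixtral GGUF should load like any other GGUF. The filename below is a
# placeholder for a quant from TheBloke/Mixtral-8x7B-v0.1-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-v0.1.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers if built with GPU support
    n_ctx=4096,
)
```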

Thanks guys!!!
