Pull requests: abetlen/llama-cpp-python
- #1721 Resync llama_grammar with llama.cpp implementation and use curly braces quantities instead of repetitions (opened Aug 31, 2024 by gbloisi-openaire)
- #1716 feat: adding support for external chat format contribution (opened Aug 29, 2024 by axel7083)
- #1677 Allow server to accept openai's new structured output "json_schema" format (opened Aug 13, 2024 by cerealbox)
- #1605 Updated README.md, llama_cpp/llama.py and pyproject.toml to add support for cross-encoders (opened Jul 17, 2024 by perpendicularai)
- #1583 Support images from local storage for Llava models (opened Jul 9, 2024 by GokulMuraliRajasekar)
- #1550 Change server approach to handle parallel requests (opened Jun 24, 2024 by sergey-zinchenko)
- #1509 Integrate Functionary v2.5 + Refactor Functionary Code (opened Jun 5, 2024 by jeffrey-fong)
- #1407 fix: add binding for name in ChatCompletionRequestToolMessage (opened Apr 29, 2024 by JDScript)
- #1351 Improve function calling (auto selection, parallel functions) (opened Apr 17, 2024 by themrzmaster)
- #1331 Feature: Lightweight llama_cpp.server Docker Image Build Workflow (opened Apr 5, 2024 by devcxl)