forked from ggml-org/llama.cpp
Issues: LostRuins/koboldcpp
- #1525 Feature Request: Default Generation Parameters for API Endpoints [enhancement] (opened May 5, 2025 by hronoas)
- #1514 GUI model selection popup doesn't work anymore [bug, changes needed, help wanted, high priority, needs review] (opened Apr 30, 2025 by stubkan)
- #1513 Any way to circumvent the virtual limit on --defaultgenamount? (opened Apr 30, 2025 by jhemmond)
- #1506 Can't get plamo-13b to run under any circumstances. Is it not supported? (opened Apr 27, 2025 by Alexamenus)
- #1497 CUDA error: function failed to launch on multi GPU in ggml-cuda during matrix multiplication (opened Apr 21, 2025 by riunxaio)
- #1491 Request to merge Microsoft/BitNet functionality, as it is fast, up-to-date, and necessary (opened Apr 19, 2025 by windkwbs)
- #1482 --cli mode causes koboldcpp to close instantly or experience an error once input is sent (opened Apr 14, 2025 by wildwolf256)
- #1481 GGML_ASSERT(cgraph->n_nodes < cgraph->size) failed - New Version (opened Apr 14, 2025 by DerRehberg)
- #1473 Gemma 3 + mmproj + flashattention falls back to CPU decoding when using --quantkv (opened Apr 8, 2025 by vlawhern)