UPSTREAM PR #18375: Fix a crash on multiple active LoRA (issue 18050) #704

Open

loci-dev wants to merge 2 commits into main from upstream-PR18375-branch_byko3y-fix-multi-lora-crash-18050

Conversation

@loci-dev
Mirrored from ggml-org/llama.cpp#18375

Split command line parameters and runtime adapter info into separate structs.
Bump the max graph size according to the LoRA adapter count and tensor size.
Fixes ggml-org/llama.cpp#18050

loci-dev force-pushed the main branch 26 times, most recently from 220f305 to 10ba485 (December 29, 2025 19:07)
loci-dev force-pushed the main branch 30 times, most recently from 9297073 to c76f9f8 (January 5, 2026 09:15)


2 participants