fix(server): prevent GGML_ABORT when prompt cache pos_min == -1 for non-standard attention architectures #2

Closed
xczhanjun wants to merge 83 commits into master from fix/prompt-cache-pos-min-abort

Conversation

@xczhanjun

Problem

When using non-standard attention architectures (e.g. DeepSeek V4 Flash with CSA+HCA), llama_memory_seq_pos_min() may return -1 even when n_past > 0. The custom KV cache layout is incompatible with standard prompt cache restoration, causing a hard crash:

server-context.cpp:2428: GGML_ABORT("pos_min == -1, but n_past > 0 - should not happen")

This happens consistently with `--parallel > 1` and `--cont-batching` on DeepSeek V4 Flash, typically on the 2nd–4th concurrent request.

Fix

Replace GGML_ABORT with graceful fallback: set n_past = 0 and pos_next = 0 to force full prompt re-evaluation. This mirrors the existing do_reset path for SWA/hybrid/recurrent memory models.

Change: `SLT_ERR` → `SLT_WRN` logging, plus `pos_next = 0; n_past = 0;` instead of `GGML_ABORT`.
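The fallback can be sketched as follows. This is a hedged illustration, not the actual server-context.cpp code: `slot_state`, `restore_prompt_cache`, and the plain `pos_min` parameter are hypothetical stand-ins for the real slot state and the `llama_memory_seq_pos_min()` call.

```cpp
#include <cstdio>
#include <cstdint>

// Hypothetical, simplified slot state standing in for the real server slot.
struct slot_state {
    int32_t n_past;
    int32_t pos_next;
};

// Returns true if the prompt cache state is usable, false if we fell back to
// a full prompt re-evaluation (the graceful path that replaces GGML_ABORT).
bool restore_prompt_cache(slot_state & slot, int32_t pos_min) {
    if (pos_min == -1 && slot.n_past > 0) {
        // Non-standard attention KV layout: the cached positions cannot be
        // restored, so reset instead of aborting the whole server.
        std::fprintf(stderr,
            "prompt cache: pos_min == -1 with n_past > 0, "
            "forcing full prompt re-evaluation\n");
        slot.n_past   = 0;
        slot.pos_next = 0;
        return false;
    }
    return true;
}
```

The key design point is that the failure is per-request and recoverable: resetting `n_past` costs one full prompt evaluation, while the original `GGML_ABORT` killed the process for every in-flight request.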

Testing

  • Hardware: 8×A100 80GB NVLink
  • Model: DeepSeek V4 Flash (FP8/FP4 GGUF)
  • 60+ concurrent requests across 3 rounds of Apache Bench (ab -n 20 -c 4)
  • Server config: --parallel 4 --cont-batching --ctx-size 32768
  • Result: Zero crashes. The fallback triggers correctly ("forcing full prompt re-evaluation") and the server recovers and continues serving all requests.

Compatibility

This change is backward-compatible. Standard architectures are unaffected — pos_min returns a valid value for them, and the fallback is never triggered. Non-standard architectures that were previously crashing now gracefully degrade to full re-evaluation (slightly slower for cache-miss cases, but no crash).


Upstream issue reference: ggml-org#13833 (comment)

Target repo: This PR should target nisparks/llama.cpp wip/deepseek-v4-support branch. Please change the base when merging.

trivikram-reddy1 and others added 30 commits April 25, 2026 17:58
* opencl: add general support for iq4_nl

* opencl: add iq4_nl gemm/gemv for adreno

* opencl: pack 2 lut entries into a uint
Also, distribute all elements across CTAs evenly instead of launching
one CTA per dim
…org#22362)

The previous code worked only for full tensor reads and writes and was hitting the `GGML_ASSERT(size == ggml_nbytes(tensor));` assert when tested with llama-server.
* common: refactor common/debug to move abort_on_nan into base_callback_data

Passing bool abort_on_nan as template parameter for common_debug_cb_eval is unnecessary and creates an issue with LTO.
It should just be a member of the base_callback_data instead.

* cont : cleanup

* common : use pimpl in debug.h to reduce header dependencies

Move common_debug_cb_user_data's data members (std::regex,
std::vector<uint8_t>) into a private impl struct in debug.cpp.

This removes the includes of common.h and <regex> from debug.h,
reducing transitive dependencies for any translation unit that
includes the header.

Assisted-by: llama.cpp:local pi

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
| Model                            | Test   |   t/s OLD |   t/s NEW |   Speedup |
|:---------------------------------|:-------|----------:|----------:|----------:|
| qwen35 0.8B BF16                 | pp512  |    584.59 |    595.41 |      1.02 |
| qwen35 0.8B BF16                 | tg128  |     52.23 |     52.82 |      1.01 |
| qwen35 0.8B IQ2_M - 2.7 bpw      | pp512  |    260.64 |    261.70 |      1.00 |
| qwen35 0.8B IQ2_M - 2.7 bpw      | tg128  |     81.17 |     80.89 |      1.00 |
| qwen35 0.8B IQ2_XXS - 2.0625 bpw | pp512  |    302.36 |    302.56 |      1.00 |
| qwen35 0.8B IQ2_XXS - 2.0625 bpw | tg128  |     84.93 |     85.12 |      1.00 |
| qwen35 0.8B IQ3_XXS - 3.0625 bpw | pp512  |    263.22 |    260.01 |      0.99 |
| qwen35 0.8B IQ3_XXS - 3.0625 bpw | tg128  |     80.29 |     78.94 |      0.98 |
| qwen35 0.8B IQ4_NL - 4.5 bpw     | pp512  |    728.65 |    742.09 |      1.02 |
| qwen35 0.8B IQ4_NL - 4.5 bpw     | tg128  |     82.39 |     84.46 |      1.03 |
| qwen35 0.8B IQ4_XS - 4.25 bpw    | pp512  |    681.33 |    677.06 |      0.99 |
| qwen35 0.8B IQ4_XS - 4.25 bpw    | tg128  |     80.18 |     79.28 |      0.99 |
| qwen35 0.8B Q2_K_M               | pp512  |    413.28 |    415.94 |      1.01 |
| qwen35 0.8B Q2_K_M               | tg128  |     81.90 |     82.78 |      1.01 |
| qwen35 0.8B Q3_K_M               | pp512  |    493.17 |    495.08 |      1.00 |
| qwen35 0.8B Q3_K_M               | tg128  |     82.75 |     83.23 |      1.01 |
| qwen35 0.8B Q3_K_S               | pp512  |    429.35 |    427.64 |      1.00 |
| qwen35 0.8B Q3_K_S               | tg128  |     86.69 |     87.02 |      1.00 |
| qwen35 0.8B Q4_0                 | pp512  |    783.46 |    782.32 |      1.00 |
| qwen35 0.8B Q4_0                 | tg128  |     88.23 |     87.90 |      1.00 |
| qwen35 0.8B Q4_1                 | pp512  |    741.71 |    729.76 |      0.98 |
| qwen35 0.8B Q4_1                 | tg128  |     85.44 |     86.01 |      1.01 |
| qwen35 0.8B Q4_K_M               | pp512  |    676.24 |    681.31 |      1.01 |
| qwen35 0.8B Q4_K_M               | tg128  |     76.59 |     77.06 |      1.01 |
| qwen35 0.8B Q4_K_S               | pp512  |    683.12 |    688.81 |      1.01 |
| qwen35 0.8B Q4_K_S               | tg128  |     80.50 |     81.19 |      1.01 |
| qwen35 0.8B Q5_K_M               | pp512  |    635.33 |    642.11 |      1.01 |
| qwen35 0.8B Q5_K_M               | tg128  |     72.07 |     72.49 |      1.01 |
| qwen35 0.8B Q5_K_S               | pp512  |    660.95 |    658.18 |      1.00 |
| qwen35 0.8B Q5_K_S               | tg128  |     72.19 |     72.95 |      1.01 |
| qwen35 0.8B Q6_K                 | pp512  |    647.97 |    638.84 |      0.99 |
| qwen35 0.8B Q6_K                 | tg128  |     72.83 |     72.49 |      1.00 |
| qwen35 0.8B Q8_0                 | pp512  |    805.01 |    785.49 |      0.98 |
| qwen35 0.8B Q8_0                 | tg128  |     70.10 |     70.13 |      1.00 |

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
…22394)

* fix: create directory and log cache file name.

* Remove GGML_LOG_INFO conditional compilation.

---------

Co-authored-by: kotaro <kotaro.kusunoki@gmail.com>
…rg#22420)

* Additional test for common/gemma4 : handle parsing edge cases

* Move tests to Gemma 4 test group
) (ggml-org#22118)

* This commit enables the router to forward form-data to model server.
Fixes ggml-org#22044 (enabling to use the /v1/audio/transcriptions in router mode)

* Applied the suggestion from Copilot's first comment: using the non-throwing json::parse overload
* Addressed Copilot's third comment by extending the files representation to also include filename and content-type
* Addressed Copilot's fourth comment by making the RNG thread_local

* Changed variable body from std::string to std::ostringstream in build_multipart_body
as suggested by ngxson in ggml-org#22118 (comment)

* Added sanitize_field lambda in build_multipart_body for key, filename and content_type
as suggested by ngxson in ggml-org#22118 (comment)

* explicitly checking if value/item is string before calling value/item.get<std::string>()
as requested by ngxson in ggml-org#22118 (comment)

* Added double quote to the sanitize lambda and throw on json parse failure

---------

Co-authored-by: Ralph Paßgang <ralph@trust-it.de>
* add fast matmul matvec q1_0 kernel

* ggml-webgpu: drop redundant zero-fills in Q1_0 shmem init
* spec : refactor params

* cont : fix

* cont : rename "sparam" to "sampling"

* cont : add spec params category

* cont : add info about removed arguments

* cont : skip param length check for spec params

* cont : adapt server tests
New operators:
- GGML_OP_SET: implement via aclnnInplaceCopy on target region
- GGML_OP_CUMSUM: implement via aclnnCumsum
- GGML_OP_FILL: implement via aclnnInplaceFillScalar
- GGML_OP_DIAG: implement via aclnnInplaceCopy on diagonal strides
- GGML_OP_TRI (lower/lower_diag/upper_diag/upper): implement via
  aclnnTril(-1/0) and aclnnTriu(0/1) with appropriate diagonal offsets
- GGML_OP_SOLVE_TRI: implement via aclnnTriangularSolve
- GGML_UNARY_OP_SOFTPLUS: implement via aclnnSoftplus

Optimizations:
- GLU (SwiGLU/GeGLU/GeGLU_ERF/GeGLU_QUICK): fuse with aclnnSwiGlu /
  aclnnGeGluV3 when applicable; fallback conditions now checked inside
  each function rather than at the call site
- CROSS_ENTROPY_LOSS: replace 5-kernel sequence (LogSoftmax→Mul→
  ReduceSum×2→Muls) with single aclnnSoftmaxCrossEntropyWithLogits call
- L2_NORM: fix in-place ClampMin on norm result (was clamping wrong
  tensor); add eps clamping before division to avoid divide-by-zero
- PAD_REFLECT_1D: eliminate per-ne[3] loop; assert contiguity and call
  ReflectionPad1d once on the full 4-D view; remove redundant nb copies
- GET_ROWS: replace IndexSelect with GatherV2 per batch slice; refactor
  helper into gather_batched lambda with batch loop inlined
- SET_ROWS: replace IndexCopy with InplaceIndexCopy per batch slice;
  refactor helper into scatter_batched lambda with batch loop inlined
- OUT_PROD: replace O(ne[3]*ne[2]*ne[1]) Ger+InplaceAdd loop with
  per-slice Matmul loop (src0 @ src1^T); handles strided-broadcast
  batch dims where ne02/ne03 may differ from ne2/ne3
- backend memset_tensor: implement via aclrtMemset (was NULL)

Bug fixes:
- COUNT_EQUAL: use non-inplace EqTensor into a same-type temporary
  buffer instead of InplaceEqTensor, avoiding corruption of src0
- ACL graph cache (USE_ACL_GRAPH): restore node_type and src_type[]
  fields in ggml_graph_node_properties; has_matching_properties() was
  missing type checks, causing F16 and BF16 tensors (same nb[0]=2) to
  incorrectly share cached graphs and produce wrong results (ERR≈679)
- graph cache op_params matching: compare full GGML_MAX_OP_PARAMS
  bytes so that ops differing only in parameters are not incorrectly
  replayed from cache
* ggml : revert to -lm linking instead of find_library

`find_library(MATH_LIBRARY m)` was introduced recently, but it breaks
CUDA compilation with GGML_STATIC. I could not find any valid use case
where we would prefer `find_library` over the standard `-lm` approach.

This commit is also meant to start a discussion if there is a valid
reason to keep `find_library(MATH_LIBRARY m)`, we should clarify what
problem it was solving and find an alternative fix that does not break
CUDA with GGML_STATIC.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : use MATH_LIBRARY only if defined

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : fix initial broken condition

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

* ggml : always respect MATH_LIBRARY when defined

Signed-off-by: Adrien Gallouët <angt@huggingface.co>

---------

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
…1918)

* ggml: improve SPIR-V headers detection with __has_include while preserving original _WIN32 logic

* Address review comments: fix fallback logic and add FreeBSD support

* Remove spirv_cross fallback as per review

* Remove redundant __has_include check
* wip: server_tools

* feat: Integrate with `/tools` endpoint

* feat: Builtin + MCP + JSON Schema Tools WIP

* refactor

* displayName -> display_name

* snake_case everywhere

* rm redundant field

* feat: Improvements

* chore: update webui build output

* refactor: Updates after server updates

* chore: update webui build output

* change arg to --tools all

* feat: UI improvements

* chore: update webui build output

* add readme mention

* llama-gen-docs

* chore: update webui build output

* chore: update webui build output

* chore: update webui build output

* feat: Reorganize settings sections

* feat: Separate dialogs for MCP Servers Settings and Import/Export

* feat: WIP

* feat: WIP

* feat: WIP

* feat: WIP

* feat: WIP

* feat: WIP

* WIP on allozaur/20677-webui-server-tools

* feat: UI improvements

* chore: Update package lock

* chore: Run `npm audit fix`

* feat: UI WIP

* feat: UI

* refactor: Desktop Icon Strip DRY

* feat: Cleaner rendering and transition for ChatScreen

* feat: UI improvements

* feat: UI improvement

* feat: Remove MCP Server "enable" switch from Tools submenu

* chore: Run `npm audit fix`

* feat: WIP

* feat: Logic improvements

* refactor: Cleanup

* refactor: DRY

* test: Fix Chat Sidebar UI Tests

* chore: Update package lock

* refactor: Cleanup

* feat: Chat Message Action Card with Continue and Permission flow implementations

* feat: Add agentic steering messages, draft messages and improve chat UX

* fix: Search results UI

* test: Fix unit test

* feat: UI/UX improvements

* refactor: Simplify `useToolsPanel` access in components

* feat: Implement Processing Info Context API

* feat: Implement 'Go back to chat' functionality for settings

* feat: Enhance MCP Server management in Chat Form Attachments

* style: Minor UI and branding adjustments

* chore: Update webui static build output

* chore: Formatting, linting & type checks

* feat: Draft messages logic

* feat: UI improvements

* feat: Steering Messages improvements

* refactor: Cleanup

* refactor: Cleanup

* feat: Improve UI

* refactor: Settings navigation hook

* refactor: DRY code

* refactor: DRY ChatMessageUser UI components

* refactor: Desktop Icon Strip DRY

* refactor: Tools & permissions

* fix: Navigation condition

* refactor: Cleanup

* refactor: Cleanup

* refactor: Cleanup

* fix: preserve reasoning_content in agentic flow

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Improve DeepSeek V4 conversion hot paths and add generalized converter controls for writer buffering, temp-file copying, and PyTorch thread tuning.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
nisparks and others added 28 commits April 28, 2026 12:42
Harden simulator eviction so cache state cannot exceed the configured slot count even if an unexpected trace shape violates the normal bypass invariant.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Include the implied cache byte footprint in the offline LRU simulator output so trace analysis can compare hit-rate savings against VRAM cost for each slot count.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Reject inconsistent expert_size values for a single simulator cache key so cache footprint and byte-savings reports cannot silently mix incompatible trace records.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Parse used_bytes from GGML_SCHED_MOE_LOG and reject impossible trace records before simulation so copy-byte savings cannot be computed from inconsistent metadata.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add an experimental scheduler-side MoE expert cache gated by GGML_SCHED_MOE_CACHE_SLOTS. The cache uses persistent backend buffers for expert slots plus per-op remapped ID tensors, and falls back to the existing selective-copy path when disabled or when a request does not fit.

The default path remains unchanged unless the env var is set.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Reject malformed GGML_SCHED_MOE_CACHE_SLOTS values instead of relying on atoi truncation, and leave the experimental cache disabled when the env var is invalid.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
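The strict-parsing idea above can be sketched in C. This is an illustrative assumption, not the actual ggml code; `parse_cache_slots` is a hypothetical helper showing how `strtol` can reject the trailing garbage and out-of-range values that `atoi` would silently truncate.

```c
#include <stdlib.h>
#include <limits.h>

/* Hypothetical sketch of strict env-var parsing for a slot count such as
 * GGML_SCHED_MOE_CACHE_SLOTS. Returns the slot count, or -1 (cache stays
 * disabled) for unset, empty, malformed, non-positive, or oversized input. */
static int parse_cache_slots(const char * val) {
    if (val == NULL || *val == '\0') {
        return -1; /* unset or empty: leave the experimental cache disabled */
    }
    char * end = NULL;
    long n = strtol(val, &end, 10);
    if (*end != '\0') {
        return -1; /* trailing garbage, e.g. "8x", which atoi would accept as 8 */
    }
    if (n <= 0 || n > INT_MAX) {
        return -1; /* non-positive or implausibly large: reject */
    }
    return (int) n;
}
```

Compared with `atoi`, this makes a malformed value an explicit "disabled" outcome rather than a silently truncated number.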
Reset copied view offsets and backend-specific extra metadata before allocating persistent MoE cache tensors. This keeps synthetic cache tensors independent from the source tensors they mirror.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Extend the MoE copy LRU simulator with a runtime mode that parses moe_cache log lines from the experimental scheduler cache. The report summarizes actual runtime hit/miss/copy counters and validates basic cache-log accounting.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Clarify the simulator help text now that it supports both moe_copy LRU simulation and moe_cache runtime log summaries.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Recompute selected-expert bitsets when a reused ids tensor is paired with a different expert count. This prevents selective-copy and experimental MoE cache paths from reusing a bitset sized for another expert dimension.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Handle the degenerate case where a MoE ids tensor selects no experts by skipping grouped expert copies instead of walking past n_expert in the selective-copy fallback.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
When GGML_SCHED_MOE_LOG is enabled and GGML_SCHED_MOE_CACHE_SLOTS requests the experimental cache, log why the cache falls back to selective expert copy. This makes later model validation explain cache misses such as too many active experts, unsupported layout, or allocation failures.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Extend the MoE runtime log summary to parse moe_cache_bypass lines and report fallback reasons by aggregate and backend/tensor key. This pairs with GGML_SCHED_MOE_LOG bypass-reason output for cache-enabled validation runs.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Document that runtime log summaries accept moe_cache, moe_cache_bypass, or mixed logs, and add coverage for bypass-only traces so early cache validation runs remain parseable.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Include the requested cache slot count in runtime bypass summaries so combined cache-enabled logs from multiple slot settings do not collapse distinct fallback behavior into one reason bucket.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Keep runtime cache success summaries separated by requested slot count, matching bypass summaries and allowing combined logs from multiple GGML_SCHED_MOE_CACHE_SLOTS runs to be analyzed together.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Move the scheduler-side MoE LRU cache hardening and multi-GPU placement experiments off the PR branch. This includes split-node scanning for selective expert copies, type-aligned cache slot strides for MXFP4 correctness, host-offload placement based on non-weight source tensors, and tolerant parsing of raw runtime logs with invalid UTF-8 bytes.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add count-aware MoE copy trace logging and extend the offline LRU simulator with prompt, frequency, Markov, set-Markov, and oracle prefetch policies. Include repeat and prefetch-budget controls so focused coding-agent traces can be evaluated before runtime prefetch changes.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add an env-gated runtime set-Markov policy for the experimental MoE expert cache. The default setmarkov mode is retention-only and uses learned expert-set transitions to avoid evicting likely-next resident experts; a positive GGML_SCHED_MOE_CACHE_PREFETCH_LIMIT also enables bounded speculative copies for diagnostics.

The measured runtime result is neutral to negative versus demand LRU, so copy prefetch remains disabled by default and the prototype is kept as an experimental diagnostic path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add an env-gated prompt/batch priming mode for the experimental scheduler-side MoE cache. When GGML_SCHED_MOE_CACHE_PRIME=last and a prompt-side MUL_MAT_ID touches more experts than the cache can execute from, seed the persistent cache with the last routed experts while still falling back to the normal authoritative selective-copy path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Use two output rows per CUDA block for one-token non-small-K IQ4_XS MMVQ, matching the existing F8 row-block optimization. This improves the fast DeepSeek4 IQ4_XS-expertQ3_K route on the dual 3090 setup while leaving small-K and other quant types unchanged.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
IQ4_XS MMVQ only consumes the Q8_1 scale value, so route IQ4_XS activation quantization through the existing no-sum Q8_1 kernel used by F8, MXFP4, and NVFP4. This trims the Q8 activation quantization bucket on the fast DeepSeek4 IQ route without changing other IQ types.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add experimental native FP4/FP8 CUDA tuning, DeepSeek V4 prompt-cache restore handling, live reasoning streaming support, and a DeepSeek V4 chat template validated against the shipped encoder fixtures.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
When non-standard attention architectures (e.g. DeepSeek V4 Flash CSA+HCA)
are used, llama_memory_seq_pos_min() may return -1 even with n_past > 0.
The custom KV cache layout is incompatible with standard prompt cache
restoration, causing a hard crash.

Replace GGML_ABORT with graceful fallback: set n_past=0 and pos_next=0
to force full prompt re-evaluation, same as the SWA/hybrid memory path.

Verified: 60+ concurrent requests on 8xA100, zero crashes.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
@socket-security

Review the following changes in direct dependencies.

- Added: pypi/numpy@1.26.4
- Updated: npm/@sveltejs/kit 2.50.2 → 2.57.1
- Updated: npm/vite 7.2.2 → 7.3.2
- Updated: npm/svelte 5.48.3 → 5.55.1
- Updated: npm/storybook 10.2.4 → 10.3.3
- Updated: npm/bits-ui 2.15.5 → 2.17.3


@xczhanjun xczhanjun closed this May 2, 2026