scheduler: preserve prompt checkpoints in chunked prefill resume path#221
Conversation
Critical review found a blocking scheduler issue: the chunked-prefill path still does not invoke `prompt_checkpoint_callback` after cache finalization.
This is the right approach. We had #194 open for the same crash (a one-line unpack fix) but just closed it -- your PR is the only one of the four submissions that actually preserves checkpoint semantics through the chunked-prefill lifecycle instead of silently discarding them. The checkpoint-aware finalization step (feeding the remaining checkpoint tokens through the model before generation) is what the others all miss. Happy to test once the callback gap is resolved. We run chunked prefill on Qwen3.5-122B (M2 Ultra, 128 GB) with 2048-token prefill steps.
Thump604 left a comment
The PR correctly addresses the callback gap that was flagged. The fix preserves prompt_checkpoints through partial resume and invokes the callback after finalization, which the prior submissions missed. The semantic changes to checkpoint-aware bounds checking are sound.
However, I have three concrete concerns before this merges:
- Test coverage is incomplete. `test_chunked_prefill_accepts_prompt_checkpoints` constructs a 7-tuple but never exercises the actual `zip(*batch_prompts)` unpack on line 414. `test_chunked_prefill_invokes_checkpoint_callback` doesn't validate the "checkpoint tail" model execution (lines 313-318), which is the most complex part of the fix. A test with checkpoint > 1 initially (not just in resume) would strengthen confidence.
- Line 425 has a suspicious defensive formula: `(ln - pc if pc > 0 else -pc)`. Checkpoint offsets should always be positive, so why the negation when `pc <= 0`? This deserves a comment explaining the intent: is this dead code, or does mlx-lm ever produce zero/negative checkpoints?
- The checkpoint tail replay (lines 313-318) feeds tokens `0..checkpoint-1` through the model after `finalize()`. Is this the intended semantics? The comment on line 303 says "once only the checkpoint tail remains", but then we immediately replay it. If the cache already saw these tokens during chunked processing, feeding them again seems redundant. I need clarification: does `finalize()` clear cache state, or just mark it finalized? If it clears, the replay is necessary; if not, it may be redundant computation.
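To make the test-coverage concern concrete, a regression test should drive the `zip(*batch_prompts)` unpack itself with a seventh field present. A minimal sketch follows; the 7-tuple layout and the `unpack_batch` helper are illustrative guesses, not the real vllm_mlx fixtures:

```python
# Illustrative sketch only: the tuple layout below is an assumption,
# not the actual vllm_mlx test fixture format. The point is that the
# test must actually execute a zip(*batch_prompts) unpack of 7-field
# prompt tuples rather than merely constructing the tuple.

def unpack_batch(batch_prompts):
    # Mirrors a zip(*batch_prompts) unpack of 7-field prompt tuples.
    (uids, prompts, max_tok, samplers,
     caches, progress, checkpoints) = zip(*batch_prompts)
    return uids, checkpoints

def test_seven_tuple_unpack_with_checkpoints():
    batch_prompts = [
        ("uid-0", [1, 2, 3], 16, None, None, 0, 2),  # checkpoint offset 2
        ("uid-1", [4, 5, 6], 16, None, None, 0, 0),  # no checkpoint
    ]
    uids, checkpoints = unpack_batch(batch_prompts)
    assert uids == ("uid-0", "uid-1")
    assert checkpoints == (2, 0)

test_seven_tuple_unpack_with_checkpoints()
```

A second case seeded with checkpoint > 1 before any resume would cover the "checkpoint > 1 initially" gap noted above.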
None of the three is blocking; all can be addressed post-merge. But they should be addressed before production use on 122B at 2048-token prefill steps. Otherwise this is good work fixing a real checkpoint contract violation.
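The flagged line-425 expression can be traced branch by branch. Reading `ln` as the prompt length and `pc` as the checkpoint offset is an assumption from context, not confirmed against `scheduler.py`:

```python
# Tracing the flagged expression in isolation. Interpreting ln as the
# prompt length and pc as the checkpoint offset is an assumption from
# the surrounding review, not verified against the actual code.

def remaining(ln, pc):
    return ln - pc if pc > 0 else -pc

assert remaining(10, 3) == 7   # normal case: length minus checkpoint
assert remaining(10, 0) == 0   # pc == 0: yields 0, not ln
assert remaining(10, -2) == 2  # pc < 0: negation flips the sign
```

The `pc <= 0` branch only produces a nonzero value for a negative checkpoint, which is exactly why the review asks whether it is dead code or a guard against an observed mlx-lm behavior.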
@waybarrios, @krystophny: status note. The PR currently shows CONFLICTING merge status and likely needs a rebase on current main, since the chunked-prefill code path has had other changes since this branch was created. The fix itself addresses a real upstream contract change: recent mlx-lm prompt tuples carry a seventh `prompt_checkpoints` field. This PR is likely the right fix for issue #155 (omprashantjain, "Chunked prefill crashes with mlx-lm >= 0.31.0"), which describes the symptom of the same underlying mlx-lm format change.
The upstream BatchGenerator contract requires prompt_checkpoint_callback to fire after cache finalization, before the checkpoint tail model call. The chunked-prefill monkeypatch preserved the checkpoint field but never invoked the callback, breaking the upstream checkpoint contract. Wire _lazy_extract_cache from mlx-lm and invoke the callback at the correct semantic boundary. Add regression test verifying the callback fires with the correct uid and checkpoint offset.
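The ordering the commit message describes (finalize, then callback, then the checkpoint tail model call) can be sketched as follows. `FakeCache`, `finish_chunked_prefill`, and the callback signature are hypothetical stand-ins, not the actual vllm_mlx or mlx-lm APIs:

```python
# Sketch of the semantic boundary: finalize the cache first, fire the
# checkpoint callback second, replay the checkpoint tail last. All
# names here are illustrative, not the real monkeypatch internals.

class FakeCache:
    def __init__(self):
        self.finalized = False

    def finalize(self):
        self.finalized = True

def finish_chunked_prefill(cache, uid, checkpoint, tokens,
                           model_step, checkpoint_callback):
    cache.finalize()                            # 1. cache finalization
    if checkpoint_callback is not None:
        checkpoint_callback(uid, checkpoint)    # 2. callback post-finalize
    if checkpoint > 0:
        model_step(tokens[:checkpoint], cache)  # 3. checkpoint tail call

# Record the event order to show the callback fires between
# finalization and the tail model call.
events = []
cache = FakeCache()
finish_chunked_prefill(
    cache, "uid-0", 2, [1, 2, 3, 4],
    model_step=lambda toks, c: events.append(("model", tuple(toks))),
    checkpoint_callback=lambda uid, cp: events.append(("callback", uid, cp)),
)
assert cache.finalized
assert events == [("callback", "uid-0", 2), ("model", (1, 2))]
```

A regression test can assert on exactly this event sequence, which is what distinguishes this fix from the one-line unpack patches.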
Force-push: f0a3b47 to a1d566b
Force-pushed a refresh onto current upstream.
Force-push: 2ce37be to b80073a
Thump604 left a comment
Refresh confirmed on head b80073a against upstream main b4fa030. The three concrete review points from the prior COMMENTED review are addressed:
- Inline clarification for the upstream checkpoint normalization is visible in vllm_mlx/scheduler.py.
- Inline comment on the post-finalize() checkpoint-tail replay path is visible.
- tests/test_batching.py gains +358 lines of regression coverage, including the prompt_checkpoint > 1 tail replay path that the prior test construction never exercised.
The chunked prefill path now invokes prompt_checkpoint_callback after finalization, which was the blocking issue flagged on the earlier revision. Checkpoint semantics through partial resume match the upstream contract: recent mlx-lm prompt tuples carry a seventh prompt_checkpoints field that the chunked-prefill monkeypatch now retains across partial resume.
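The retention-across-partial-resume behavior described here can be illustrated with a minimal sketch. The `PendingPrefill` record and helper names are assumptions for illustration, not the real monkeypatch internals:

```python
# Minimal sketch of round-tripping the seventh prompt_checkpoints
# field through a partial chunked-prefill pause/resume. PendingPrefill
# and these helpers are hypothetical, not vllm_mlx internals.

from dataclasses import dataclass

@dataclass
class PendingPrefill:
    uid: str
    remaining_tokens: list
    prompt_checkpoint: int  # preserved verbatim, never dropped

def store_partial(pending, uid, tokens, processed, checkpoint):
    # Pause after `processed` tokens; keep the checkpoint with the rest.
    pending[uid] = PendingPrefill(uid, tokens[processed:], checkpoint)

def resume(pending, uid):
    entry = pending.pop(uid)
    return entry.remaining_tokens, entry.prompt_checkpoint

pending = {}
store_partial(pending, "uid-0", [1, 2, 3, 4, 5], processed=3, checkpoint=2)
rest, cp = resume(pending, "uid-0")
assert rest == [4, 5] and cp == 2  # checkpoint survives the round trip
```

The bug class being fixed is exactly the case where the pending record omits the checkpoint field and resume silently reconstructs a 6-tuple.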
CI green across lint, type-check, test-matrix 3.10-3.12, test-apple-silicon, tests. Approved. This is the right fix for issue #155 ("Chunked prefill crashes").
Summary
Preserve `prompt_checkpoints` across chunked-prefill partial resume and finalization for `mlx-lm` prompt tuples, and invoke the upstream `prompt_checkpoint_callback` contract.

Why

Recent `mlx-lm` prompt tuples carry a seventh `prompt_checkpoints` field. Upstream one-line fixes cover the immediate unpack crash in `_chunked_next`, but the chunked-prefill monkeypatch also needs to retain that field when it stores partial progress and resumes generation later.

Additionally, the upstream `BatchGenerator` invokes `prompt_checkpoint_callback` after cache finalization. The chunked-prefill monkeypatch was missing this callback invocation, breaking the checkpoint contract.

What changed

- Wire `_lazy_extract_cache` from mlx-lm and invoke `prompt_checkpoint_callback` at the correct semantic boundary (after `c.finalize()`, before the checkpoint tail model call)
- Regression coverage for the `prompt_checkpoint > 1` tail replay

Status

- Rebased onto `main` (b4fa030) on 2026-04-09
- Inline comments explain the `prompt_checkpoint` normalization and the post-`finalize()` checkpoint-tail replay
- Regression coverage includes the `prompt_checkpoint > 1` tail replay path the review called out

Files to review

- `vllm_mlx/scheduler.py`
- `tests/test_batching.py`

Validation

- `python -m pytest tests/test_batching.py::TestSchedulerBasic -q` -> 10 passed