
[Bugfix] Assertion error when decode prefix cache fully hits#7236

Merged
jianzs merged 3 commits into vllm-project:main from LCAIZJ:bugfix_connector
Mar 17, 2026
Conversation

Collaborator

@LCAIZJ LCAIZJ commented Mar 13, 2026

What this PR does / why we need it?

Problem

When the decode node enables prefix cache and the local prefix cache fully hits, the following assertion error occurs:

```
(EngineCore_DP3 pid=34912)   File "/usr/local/python3.11.14/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 520, in step_with_batch_queue
(EngineCore_DP3 pid=34912)     engine_core_outputs = self.scheduler.update_from_output(
(EngineCore_DP3 pid=34912)                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP3 pid=34912)   File "/usr/local/python3.11.14/lib/python3.11/site-packages/vllm/v1/core/sched/scheduler.py", line 1520, in update_from_output
(EngineCore_DP3 pid=34912)     self._update_from_kv_xfer_finished(kv_connector_output)
(EngineCore_DP3 pid=34912)   File "/usr/local/python3.11.14/lib/python3.11/site-packages/vllm/v1/core/sched/scheduler.py", line 2120, in _update_from_kv_xfer_finished
(EngineCore_DP3 pid=34912)     assert RequestStatus.is_finished(req.status)
(EngineCore_DP3 pid=34912)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP3 pid=34912) AssertionError
```

The error is triggered in scheduler.py at _update_from_kv_xfer_finished:

```
  if req.status == RequestStatus.WAITING_FOR_REMOTE_KVS:
      self.finished_recving_kv_req_ids.add(req_id)
  else:
      assert RequestStatus.is_finished(req.status)
```

Root Cause

When the decode node has prefix cache enabled and the local prefix cache fully hits:

  1. get_num_new_matched_tokens returns ext_tokens=0, load_kv_async=False
  2. The request status becomes RUNNING (not WAITING_FOR_REMOTE_KVS)
  3. However, update_state_after_alloc still adds the request to _reqs_need_recv, because remote_block_ids exists in kv_transfer_params
  4. The worker processes the request in _handle_request:
    - _transfer_kv_cache returns immediately (no actual transfer, since local_block_ids is empty)
    - the finally block still calls update_done_task_count(request_id)
  5. finished_recving therefore contains this request
  6. When _update_from_kv_xfer_finished processes finished_recving, the request status is RUNNING
  7. The assertion fails
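The failure path above can be sketched with a toy model of the scheduler-side check. The status names and branch structure follow the traceback and snippet quoted earlier; the enum and function here are simplified stand-ins, not vLLM's actual classes:

```python
from enum import Enum, auto

class RequestStatus(Enum):
    RUNNING = auto()
    WAITING_FOR_REMOTE_KVS = auto()
    FINISHED_STOPPED = auto()  # stand-in for any finished status

    @staticmethod
    def is_finished(status: "RequestStatus") -> bool:
        return status is RequestStatus.FINISHED_STOPPED

def update_from_kv_xfer_finished(status: RequestStatus) -> str:
    """Mirrors the scheduler branch that raises the AssertionError."""
    if status is RequestStatus.WAITING_FOR_REMOTE_KVS:
        return "added to finished_recving_kv_req_ids"
    assert RequestStatus.is_finished(status)  # fails for RUNNING
    return "request freed"

# A request left RUNNING by a full local prefix-cache hit, but still
# reported in finished_recving, trips the assertion:
try:
    update_from_kv_xfer_finished(RequestStatus.RUNNING)
except AssertionError:
    print("AssertionError reproduced")
```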

Solution

In _handle_request, only notify the scheduler (via update_done_task_count) when an actual KV transfer has happened, i.e. when local_block_ids is not empty. The signals that tell the Prefill node to release its KVCache (_send_done_signal_to_free_remote_port and _send_done_recv_signal) are still sent regardless.
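A minimal, runnable sketch of this fix follows. The ConnectorWorker class, the dict-shaped requests, and the two tracking lists are illustrative assumptions; only the method names (_handle_request, _transfer_kv_cache) and the rule — report completion to the scheduler only when local_block_ids is non-empty, but always signal the prefill side — come from the PR description:

```python
class ConnectorWorker:
    """Simplified stand-in for the decode-side connector worker."""

    def __init__(self):
        self.done_task_ids = []      # what the scheduler sees as finished_recving
        self.freed_remote_ids = []   # done signals sent to the prefill side

    def _transfer_kv_cache(self, request):
        if not request["local_block_ids"]:
            return  # full local prefix-cache hit: nothing to pull
        # ... a real implementation would pull remote KV blocks here ...

    def _handle_request(self, request):
        try:
            self._transfer_kv_cache(request)
        finally:
            # Always notify prefill so it can release its KV cache.
            self.freed_remote_ids.append(request["request_id"])
            # Only notify the scheduler when a transfer actually happened;
            # otherwise the request never entered WAITING_FOR_REMOTE_KVS
            # and must not appear in finished_recving.
            if request["local_block_ids"]:
                self.done_task_ids.append(request["request_id"])

worker = ConnectorWorker()
worker._handle_request({"request_id": "req-0", "local_block_ids": []})      # full hit
worker._handle_request({"request_id": "req-1", "local_block_ids": [3, 4]})  # real transfer
print(worker.done_task_ids)     # ['req-1']
print(worker.freed_remote_ids)  # ['req-0', 'req-1']
```

With this ordering, the prefill node still frees its KV cache for the fully-hit request, but the scheduler's finished_recving set only ever contains requests that actually waited on a remote transfer.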

Does this PR introduce any user-facing change?

How was this patch tested?

- vLLM version: v0.17.0
- vLLM main: vllm-project/vllm@4034c3d

Signed-off-by: LCAIZJ <leichao139636@163.com>
@LCAIZJ LCAIZJ requested a review from MengqingCao as a code owner March 13, 2026 14:59
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical assertion error that manifested in distributed environments when the prefix cache fully hit during decoding. The fix ensures that the scheduler's task tracker is only updated for KV transfer completion when an actual transfer of KV blocks has occurred, preventing incorrect state assertions and improving the robustness of the distributed KV cache management.

Highlights

  • Fix Assertion Error: Resolved an AssertionError that occurred when the decode node's prefix cache fully hit, specifically in _update_from_kv_xfer_finished, due to the scheduler being notified of KV transfer completion even when no actual transfer happened.
  • Conditional Task Tracker Update: Modified the _handle_request method to ensure that the task_tracker.update_done_task_count is only called when actual KV block transfers (local_block_ids are present) have taken place, preventing incorrect state assertions.


Changelog
  • vllm_ascend/distributed/kv_transfer/kv_p2p/mooncake_connector.py
    • Modified the _handle_request method to conditionally update the task tracker only when local_block_ids are present, indicating an actual KV transfer.

@LCAIZJ LCAIZJ requested a review from jianzs March 13, 2026 14:59

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a critical bug that causes an AssertionError when a decode request with prefix caching results in a full cache hit. The root cause is well-identified: a task completion is incorrectly reported even when no KV transfer occurs, leading to an incorrect state in the scheduler. The proposed fix is to conditionally report task completion only when local_block_ids is non-empty, ensuring that the scheduler's state remains consistent. The change is correct and effectively resolves the issue.

As per the repository's style guide, I suggest updating the pull request title and summary for clarity and consistency:

Suggested PR Title:

[KVTransfer][BugFix] Fix assertion error on full prefix cache hit

Suggested PR Summary:

### What this PR does / why we need it?

This PR fixes an `AssertionError` that occurs in the scheduler when a decode request with prefix caching has a full cache hit.

#### Problem
When a decode node enables prefix cache and the local prefix cache fully hits, an `AssertionError` is triggered in `scheduler.py` at `_update_from_kv_xfer_finished` because the request status is `RUNNING` instead of a finished status.

#### Root Cause
When a decode prefix cache fully hits:
1. `get_num_new_matched_tokens` returns `ext_tokens=0` and `load_kv_async=False`.
2. The request status becomes `RUNNING`.
3. `update_state_after_alloc` still adds the request to `_reqs_need_recv`.
4. In the worker's `_handle_request`, `_transfer_kv_cache` returns immediately as there are no local blocks to transfer.
5. However, the `finally` block unconditionally calls `task_tracker.update_done_task_count(request_id)`, incorrectly signaling that a KV transfer has finished.
6. This leads to the scheduler processing a request in `RUNNING` state as if it were finished, causing the assertion to fail.

#### Solution
The solution is to only call `update_done_task_count` in `_handle_request` when an actual KV transfer has occurred. This is determined by checking if `local_block_ids` is non-empty.

### Does this PR introduce _any_ user-facing change?
No. This PR fixes a bug that causes a crash, improving stability. There are no user-facing API or behavior changes.

### How was this patch tested?
CI will be run to ensure existing tests pass. Manual testing was performed to reproduce the bug and verify that this patch resolves the assertion error under the described conditions.

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description, to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@LCAIZJ LCAIZJ changed the title [WIP][Bugfix] Assertion error when decode prefix cache fully hits [Bugfix] Assertion error when decode prefix cache fully hits Mar 17, 2026
@jianzs jianzs enabled auto-merge (squash) March 17, 2026 15:17
@jianzs jianzs added ready read for review ready-for-test start test by label for PR labels Mar 17, 2026
@jianzs jianzs merged commit d9ac7e8 into vllm-project:main Mar 17, 2026
62 of 63 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 18, 2026
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
@LCAIZJ LCAIZJ deleted the bugfix_connector branch March 30, 2026 01:55
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026