
[Bugfix] fix TransposeKvCacheByBlock op error report in plog#7235

Merged
LCAIZJ merged 1 commit into vllm-project:main from lidenghui1110:fix-plog-error
Mar 17, 2026

Conversation

@lidenghui1110
Contributor

@lidenghui1110 lidenghui1110 commented Mar 13, 2026

What this PR does / why we need it?

As reported in issue #7201, some TransposeKvCacheByBlock-related ERROR entries appear in the plog when vLLM launches. They do not affect vLLM's operation, but the errors are confusing when debugging. This PR fixes the problem as suggested in the issue.

Does this PR introduce any user-facing change?

no.

How was this patch tested?

Signed-off-by: lidenghui <lidenghui1110@gmail.com>
@lidenghui1110 lidenghui1110 requested a review from zzzzwwjj as a code owner March 13, 2026 13:41
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug where the TransposeKvCacheByBlock operation was incorrectly reporting errors in the plog during vLLM launch, despite not affecting the runtime functionality. The fix involves updating the operator's input definition to correctly specify the expected data types and formats, thereby eliminating the misleading error messages.

Highlights

  • TransposeKvCacheByBlock Operator Definition: Corrected the data type and format definitions for the blockIDs input of the TransposeKvCacheByBlock operator to resolve erroneous error logs.


Changelog
  • csrc/transpose_kv_cache_by_block/op_host/transpose_kv_cache_by_block_def.cpp
    • Updated the blockIDs input definition to correctly specify DataType and Format as ge::DT_INT64, ge::DT_INT64 and ge::FORMAT_ND, ge::FORMAT_ND respectively.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes an error log during vLLM startup related to the TransposeKvCacheByBlock operation. The fix involves changing the operator definition for the blockIDs input. My review includes a suggestion to add a code comment to explain this non-obvious change, improving future maintainability.

In addition, I've updated the PR title and summary to better align with the repository's style guide.

Suggested PR Title:

[Ops][BugFix] Fix TransposeKvCacheByBlock op error report in plog

Suggested PR Summary:

### What this PR does / why we need it?
<!--
- Please clarify what changes you are proposing. The purpose of this section is to outline the changes and how this PR fixes the issue.
If possible, please consider writing useful notes for better and faster reviews in your PR.

- Please clarify why the changes are needed. For instance, the use case and bug description.

- Fixes #7201
-->
This PR addresses an issue where `TransposeKvCacheByBlock` operation-related errors appear in the `plog` during vLLM startup, as reported in #7201. Although these errors do not affect vLLM's runtime, they can be confusing during debugging.

The fix aligns the `blockIDs` input definition with other inputs in the operator by:
- Using `DataType` instead of `DataTypeList`.
- Duplicating the type and format specifiers to match the count used by other dynamic inputs (`KCache`, `VCache`).

This resolves the error messages logged at startup.

### Does this PR introduce _any_ user-facing change?
<!--
Note that it means *any* user-facing change including all aspects such as API, interface or other behavior changes.
Documentation-only updates are not considered user-facing changes.
-->
No.

### How was this patch tested?
<!--
CI passed with new added/existing test.
If it was tested in a way different from regular unit tests, please clarify how you tested step by step, ideally copy and paste-able, so that other reviewers can test and check, and descendants can verify in the future.
If tests were not added, please describe why they were not added and/or why it was difficult to add.
-->
The change was validated by observing the absence of the `TransposeKvCacheByBlock` error in the `plog` after applying the fix. Existing CI tests continue to pass.

Comment on lines +20 to +22

```cpp
.DataType({ge::DT_INT64, ge::DT_INT64})
.Format({ge::FORMAT_ND, ge::FORMAT_ND})
.UnknownShapeFormat({ge::FORMAT_ND, ge::FORMAT_ND});
```
Contributor

Severity: high

This change appears to be a workaround for a framework requirement where all input definitions in an operator must specify the same number of data types and formats, even for REQUIRED inputs that only support a single type. While this fixes the issue, the duplicated type ge::DT_INT64 is not intuitive and could be mistaken for a bug by future developers. To improve maintainability and prevent this fix from being accidentally reverted, please add a comment explaining why this duplication is necessary.

```cpp
// NOTE: The number of types and formats must match other dynamic inputs
// in this op definition, even though this input is required and only supports one type.
.DataType({ge::DT_INT64, ge::DT_INT64})
.Format({ge::FORMAT_ND, ge::FORMAT_ND})
.UnknownShapeFormat({ge::FORMAT_ND, ge::FORMAT_ND})
```

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write the commit message by fulfilling the PR description to help reviewer and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@jianzs jianzs added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 16, 2026
@LCAIZJ LCAIZJ merged commit 4e62a2a into vllm-project:main Mar 17, 2026
65 of 67 checks passed
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
…oject#7235)

### What this PR does / why we need it?

As reported in issue vllm-project#7201, some TransposeKvCacheByBlock-related
ERROR entries appear in the plog when vLLM launches. They do not affect
vLLM's operation, but the errors are confusing when debugging. This PR
fixes the problem as suggested in the issue.

### Does this PR introduce _any_ user-facing change?
no.

### How was this patch tested?

- vLLM version: v0.17.0
- vLLM main:
vllm-project/vllm@4034c3d

Signed-off-by: lidenghui <lidenghui1110@gmail.com>
ichaoren pushed a commit to ichaoren/vllm-ascend that referenced this pull request Mar 17, 2026
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 18, 2026
…scend into qwen3next_graph

* 'qwen3next_graph' of https://github.com/845473182/vllm-ascend: (62 commits)
  [doc] Refresh the documentation for DeepSeek-V3.2 (vllm-project#7403)
  [bugfix][accuracy] Fix ds indexer accuracy problem caused by k rope (vllm-project#7341)
  [P/D] LayerwiseConnector supports the virtual push functionality on node D. (vllm-project#7361)
  [CI] Add PAT_TOKEN when checkout (vllm-project#7400)
  [main2main] upgrade vllm to 0308 (vllm-project#7213)
  [CI] add scheduled stale issue management (vllm-project#7354)
  [CI] expand issue labeler rules for feature/model triage (vllm-project#7356)
  [Bugfix] Assertion error when decode prefix cache fully hits (vllm-project#7236)
  [doc] Refresh the documentation for GLM-4.7 (vllm-project#7292)
  [BugFix]A2 MOE method&& layerwise MTP bugfix && Mamba gdn_metadata bugfix (vllm-project#7364)
  [doc] Upload doc for qwen3.5-27B and qwen3.5-397B-A17B on Ascend (vllm-project#7313)
  [bugfix]Enable dispatch_ffn_combine feature for qwen3.5 (vllm-project#7066)
  [bugfix] fix unzip file path for fia operator (vllm-project#7367)
  [Perf] Optimize bias handling in AscendRMSNorm (vllm-project#7226)
  [eagle3][pcp] fix bug for eagle3 and cp enable (vllm-project#7309)
  [Bugfix] fix TransposeKvCacheByBlock op error report in plog (vllm-project#7235)
  [Feature]Supports DSv3.1 PD separation and C8 quantization (vllm-project#7222)
  [main][bugfix] Fixed the problem that eagle3 will crash in FULL_DECODE_ONLY (vllm-project#7290)
  [xlite][Bugfix] Support mrope and deepstack features in xlite backend (vllm-project#7295)
  [model_runner_v2]optimize the performance of the _topk_log_softmax_kernel (vllm-project#7221)
  ...
MengqingCao added a commit that referenced this pull request Mar 18, 2026
### What this PR does / why we need it?
Documented an issue in the 2-node PD mixed deployment scenario where
inference may hang when concurrency exceeds 8 (GLM5).

Noted that the issue has been fixed in PR:
- #7235 
- #7290.
---------
Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026

Labels

ready (read for review), ready-for-test (start test by label for PR)

3 participants