[0.13.0][cherry-pick][bugfix](cp) align max_context_chunk to cp_virtual_block_size#5782

Merged
wangxiyuan merged 1 commit into vllm-project:releases/v0.13.0 from pisceskkk:bugfix-r0.13.0
Jan 12, 2026

Conversation

@pisceskkk pisceskkk commented Jan 12, 2026

What this PR does / why we need it?

In the chunked prefill scenario, context parallelism (CP) must align max_context_chunk to cp_virtual_block_size, but the current implementation aligns it only to block_size. For PD disaggregation, cp_kv_cache_interleave_size is typically set equal to block_size, in which case cp_virtual_block_size = block_size * dcp_size * pcp_size. Whenever max_context_chunk ends up a multiple of block_size but not of cp_virtual_block_size, chunk boundaries become misaligned and trigger assertion failures.
Ref: #5767
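The failure mode above can be sketched as follows. This is an illustrative reconstruction, not the actual patch: the helper name align_chunk and the concrete sizes are assumptions, and only the alignment arithmetic mirrors the description.

```python
from math import lcm

def align_chunk(max_context_chunk: int, alignment: int) -> int:
    # Round max_context_chunk down to a multiple of `alignment`.
    return (max_context_chunk // alignment) * alignment

block_size = 128
dcp_size = 2
pcp_size = 2
# With cp_kv_cache_interleave_size == block_size, a CP virtual block spans
# block_size * dcp_size * pcp_size tokens, per the description above.
cp_virtual_block_size = block_size * dcp_size * pcp_size  # 512

max_context_chunk = 1280  # some hypothetical per-step context budget

# Buggy behavior: aligning only to block_size can leave a chunk that is
# not a multiple of cp_virtual_block_size.
buggy = align_chunk(max_context_chunk, block_size)  # 1280
assert buggy % cp_virtual_block_size != 0           # misaligned -> assertion errors downstream

# Fix: align to cp_virtual_block_size (equivalently, the LCM of the two).
fixed = align_chunk(max_context_chunk, lcm(block_size, cp_virtual_block_size))
assert fixed % cp_virtual_block_size == 0           # 1024, properly aligned
```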

Does this PR introduce any user-facing change?

No

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes a bug in the chunked prefill scenario with context parallelism by ensuring max_context_chunk is properly aligned. The approach of updating self.block_size to be the least common multiple of its original value and cp_virtual_block_size is sound. I've added one suggestion to improve robustness by adding an assertion to prevent a potential division-by-zero error in case of a misconfiguration. Overall, this is a good fix.
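The reviewed approach and the suggested guard can be sketched as below. This is a hedged illustration of the idea, assuming hypothetical names; the actual code lives in mla_cp.py and may differ.

```python
from math import lcm

def updated_block_size(block_size: int, cp_virtual_block_size: int) -> int:
    # Suggested robustness guard: a non-positive cp_virtual_block_size
    # (e.g. from a misconfigured dcp/pcp size) would make lcm() return 0,
    # and later divisions by the block size would fail. Fail fast instead.
    assert cp_virtual_block_size > 0, (
        "cp_virtual_block_size must be positive; check dcp/pcp configuration"
    )
    # Align to the least common multiple of the original block size and
    # the CP virtual block size, as the fix describes.
    return lcm(block_size, cp_virtual_block_size)

print(updated_block_size(128, 512))  # -> 512
```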

Comment thread on vllm_ascend/attention/context_parallel/mla_cp.py
@pisceskkk pisceskkk changed the base branch from main to releases/v0.13.0 January 12, 2026 06:05
@weiguihua2 weiguihua2 added the ready (read for review) and ready-for-test (start test by label for PR) labels Jan 12, 2026
@wangxiyuan wangxiyuan changed the title [bugfix](cp) align max_context_chunk to cp_virtual_block_size [0.13.0][cherry-pick][bugfix](cp) align max_context_chunk to cp_virtual_block_size Jan 12, 2026
@wangxiyuan wangxiyuan merged commit 15b7cc2 into vllm-project:releases/v0.13.0 Jan 12, 2026
21 of 22 checks passed
@pisceskkk pisceskkk deleted the bugfix-r0.13.0 branch January 13, 2026 03:47

Labels

ready (read for review), ready-for-test (start test by label for PR)

4 participants