
[bugfix](CP) Fix and unify the PD request discrimination logic.#5939

Merged
wangxiyuan merged 1 commit into vllm-project:main from pisceskkk:fix_main2main
Jan 31, 2026

Conversation

Contributor

@pisceskkk pisceskkk commented Jan 15, 2026

What this PR does / why we need it?

Since vllm-project/vllm#32118 changed the criteria vLLM uses to distinguish Prefill and Decode requests, PCPManager needs to follow the same standard. Because PCPManager computes the PD request counts in several places, this PR consolidates the related logic and updates the PD request count once per batch.
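The criterion described above can be sketched in plain NumPy. This is a hedged illustration, not the actual PCPManager code: the function name `count_pd_requests` and the `decode_threshold` default are assumptions for the example; only the expression `(query_lens > decode_threshold) | (num_computed_tokens == 0)` mirrors the logic shown in this PR's diff.

```python
import numpy as np

def count_pd_requests(query_lens, num_computed_tokens, decode_threshold=1):
    """Classify each request in a batch as prefill or decode (sketch).

    A request counts as a prefill if its scheduled query length exceeds
    the decode threshold OR it has no computed tokens yet (a brand-new
    request); everything else is a decode. Computing this once per batch
    lets all downstream consumers share one consistent result.
    """
    query_lens = np.asarray(query_lens)
    num_computed_tokens = np.asarray(num_computed_tokens)
    is_prefill = (query_lens > decode_threshold) | (num_computed_tokens == 0)
    num_prefills = int(is_prefill.sum())
    num_decodes = len(query_lens) - num_prefills
    return is_prefill, num_prefills, num_decodes

# Example batch: a new long prompt, an ordinary decode step,
# and a later chunk of a chunked prefill.
is_prefill, num_prefills, num_decodes = count_pd_requests(
    query_lens=[512, 1, 256],
    num_computed_tokens=[0, 1024, 128],
)
```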

How was this patch tested?

```bash
pytest tests/e2e/multicard/4-cards/long_sequence/test_mtp.py
```

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@pisceskkk pisceskkk force-pushed the fix_main2main branch 2 times, most recently from fadf7ef to 253ce21 Compare January 16, 2026 07:27
```diff
 if query_lens_pcp_full is None else query_lens_pcp_full
-is_prefill = query_lens > decode_threshold
+num_computed_tokens = common_attn_metadata.num_computed_tokens_cpu
+is_prefill = (query_lens > decode_threshold) | (num_computed_tokens == 0)
```
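The change above adds a `num_computed_tokens == 0` disjunct to the query-length-only check. A hedged illustration of why that can matter (the concrete threshold and token counts below are assumptions for the example, not values from this PR): with speculative decoding such as MTP, a decode step can schedule several tokens, so the decode threshold is raised, and a brand-new short prompt could then fall under it and be misclassified as a decode.

```python
import numpy as np

# Assume MTP with 2 draft tokens: a decode step may schedule up to 3
# tokens, so decode_threshold is raised to 3 (illustrative value).
decode_threshold = 3

query_lens = np.array([2, 3, 4])             # tokens scheduled this step
num_computed_tokens = np.array([0, 50, 60])  # tokens already in KV cache

# Old criterion: query length alone.
old_is_prefill = query_lens > decode_threshold

# New criterion: also treat requests with no computed tokens as prefills.
new_is_prefill = (query_lens > decode_threshold) | (num_computed_tokens == 0)
```

Request 0 is a brand-new 2-token prompt: the old rule would count it as a decode, while the new rule correctly flags it as a prefill.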
Collaborator

@wangxiyuan wangxiyuan Jan 16, 2026


@weiguihua2 weiguihua2 added the ready (read for review) and ready-for-test (start test by label for PR) labels Jan 16, 2026
@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@pisceskkk pisceskkk force-pushed the fix_main2main branch 8 times, most recently from 2c7dbb3 to 85491d8 Compare January 30, 2026 07:37
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
@wangxiyuan wangxiyuan merged commit 638cae8 into vllm-project:main Jan 31, 2026
26 checks passed
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
…-project#5939)

### What this PR does / why we need it?
Since the PR (vllm-project/vllm#32118) has
modified the criteria for judging Prefill and Decode requests in vLLM,
PCPManager needs to synchronize with this standard. As PCPManager
involves multiple calculations of PD request counts, this PR attempts to
consolidate the related logic and update the PD request count once per
batch.

### How was this patch tested?
```bash
pytest tests/e2e/multicard/4-cards/long_sequence/test_mtp.py
```

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@11b6af5

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
Signed-off-by: momochenchuw <chenchuw@huawei.com>
@wangxiyuan wangxiyuan mentioned this pull request Feb 24, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026

Labels

module:tests · ready (read for review) · ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants