
[BugFix] Handle num_cached_tokens/num_external_computed_tokens for different vllm version#8426

Merged
wangxiyuan merged 1 commit intovllm-project:mainfrom
Potabk:fix_main
Apr 20, 2026
Conversation

Collaborator

@Potabk Potabk commented Apr 20, 2026

What this PR does / why we need it?

This fixes vllm-project/vllm#37460.
This PR introduces version-specific logic to handle num_cached_tokens and num_external_computed_tokens in the scheduler, ensuring compatibility with vLLM 0.19.0 and maintaining legacy support for older versions via prefill_stats.

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: wangli <wangli858794774@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the vllm_ascend core components to ensure compatibility across different versions of the vLLM framework. By implementing version-aware checks, the changes ensure that token counting mechanisms and prefill statistics reporting function correctly regardless of the underlying vLLM version, preventing potential regressions in scheduling and engine output processing.

Highlights

  • Version Compatibility: Introduced conditional logic using vllm_version_is to handle differences in token counting and prefill statistics between vLLM versions, specifically targeting v0.19.0.
  • Prefill Statistics: Added support for prefill_stats updates in newer vLLM versions, ensuring compatibility with the updated engine output structure.
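
The version-aware pattern described in the highlights can be sketched as follows. This is a minimal illustration, not the PR's actual code: `vllm_version_is` mirrors the exact-match helper from `vllm_ascend/utils.py`, and the `Request` fields are assumptions based on the snippets quoted later in this thread.

```python
# Minimal sketch of the version-gated token accounting described above.
# Assumption: vllm_version_is(target) is an exact-match check, as in
# vllm_ascend/utils.py; the Request shape is a simplified stand-in.
from dataclasses import dataclass, field

VLLM_VERSION = "0.19.0"  # stand-in for the detected vLLM version


def vllm_version_is(target: str) -> bool:
    """Exact-match version check, mirroring the helper under review."""
    return VLLM_VERSION == target


@dataclass
class Request:
    num_computed_tokens: int = 0
    num_cached_tokens: int = -1  # -1 means "not yet counted"
    prefill_stats: dict = field(default_factory=dict)


def count_prefix_cached_tokens(request: Request) -> None:
    # Count the number of prefix cached tokens.
    if request.num_cached_tokens < 0:
        request.num_cached_tokens = request.num_computed_tokens
    if vllm_version_is("0.19.0"):
        # New path: the scheduler reads the direct attribute.
        pass
    else:
        # Legacy path: report through prefill_stats instead.
        request.prefill_stats["num_cached_tokens"] = request.num_cached_tokens


req = Request(num_computed_tokens=128)
count_prefix_cached_tokens(req)
print(req.num_cached_tokens)  # 128
```

On vLLM 0.19.0 the direct attribute is used and `prefill_stats` stays empty; on any other version the count is reported through `prefill_stats` instead.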


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces version-specific logic across several schedulers (recompute_scheduler.py, scheduler_dynamic_batch.py, scheduler_profiling_chunk.py, and patch_balance_schedule.py) to handle num_cached_tokens and num_external_computed_tokens based on the vLLM version. For vLLM 0.19.0 it uses these direct attributes, while for other versions it maintains legacy support by setting and retrieving prefill_stats. However, the exact version matching via vllm_version_is("0.19.0") is problematic: it will fail for any patch release (e.g., 0.19.1) or future minor/major version, potentially causing regressions or runtime errors when the code reverts to the legacy paths on newer vLLM releases.

Suggested PR Title:

[Ops][BugFix] Handle num_cached_tokens/num_external_computed_tokens for different vllm version

Suggested PR Summary:

### What this PR does / why we need it?
This PR introduces version-specific logic to handle `num_cached_tokens` and `num_external_computed_tokens` in the scheduler, ensuring compatibility with vLLM 0.19.0 and maintaining legacy support for older versions via `prefill_stats`.

Fixes https://github.com/vllm-project/vllm/pull/37460

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.

```python
# Count the number of prefix cached tokens.
if request.num_cached_tokens < 0:
    request.num_cached_tokens = request.num_computed_tokens
if vllm_version_is("0.19.0"):
```
Severity: high

The vllm_version_is("0.19.0") check uses exact equality (as defined in vllm_ascend/utils.py). This logic will return False for any subsequent versions (e.g., 0.19.1, 0.20.0), causing the scheduler to revert to the legacy prefill_stats path. If the API changes introduced in 0.19.0 persist in later versions, this will lead to runtime errors. Consider using a version comparison (e.g., >= 0.19.0) to ensure future compatibility.
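
One way to implement the reviewer's suggestion is a threshold comparison instead of exact matching. The sketch below is illustrative: `vllm_version_at_least` is a hypothetical helper, not an existing vllm_ascend API, and the naive dotted-integer parser stands in for a real version library.

```python
# Hedged sketch of a ">=" version gate, so that patch releases (0.19.1)
# and later minors still take the new code path.
# `vllm_version_at_least` is a hypothetical helper, not an existing API.

VLLM_VERSION = "0.19.1"  # stand-in for the installed vLLM version


def _parse(version: str) -> tuple:
    # Naive parser: handles plain dotted-integer versions only.
    return tuple(int(part) for part in version.split("."))


def vllm_version_at_least(target: str) -> bool:
    """True if the installed vLLM version is >= target."""
    return _parse(VLLM_VERSION) >= _parse(target)


print(vllm_version_at_least("0.19.0"))  # True: 0.19.1 takes the new path
print(vllm_version_at_least("0.20.0"))  # False
```

In real code, `packaging.version.Version` is the more robust choice, since it also handles pre-release and dev tags (e.g. `0.19.0rc1`) that this naive parser would reject.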

```python
# Count the number of prefix cached tokens.
if request.num_cached_tokens < 0:
    request.num_cached_tokens = num_computed_tokens
if vllm_version_is("0.19.0"):
```
Severity: high

Exact version matching with vllm_version_is("0.19.0") is fragile and will fail on patch releases or newer minor versions. This should be updated to a "greater than or equal to" comparison to avoid regressions in future vLLM releases.

```python
    continue

request.num_external_computed_tokens = ext_tokens
if vllm_version_is("0.19.0"):
```
Severity: high

Using exact version matching for feature availability will incorrectly return False for versions like 0.19.1 or 0.20.0. This will likely break the scheduler on any version newer than exactly 0.19.0.
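
An alternative to version pinning altogether is feature detection: branch on whether the new attribute exists on the request object. This is a generic Python pattern, not code from the PR; the class names and the `prefill_stats` key below are illustrative assumptions.

```python
# Feature-detection sketch: prefer the direct attribute when the running
# vLLM exposes it, and fall back to the legacy prefill_stats dict otherwise.
# The request classes here are simplified stand-ins, not vLLM types.

class NewStyleRequest:
    # Newer vLLM exposes the attribute directly on the request.
    num_external_computed_tokens = 0


class LegacyRequest:
    # Older vLLM reports through a prefill_stats dict instead.
    def __init__(self):
        self.prefill_stats = {}


def record_external_tokens(request, ext_tokens: int) -> None:
    if hasattr(request, "num_external_computed_tokens"):
        request.num_external_computed_tokens = ext_tokens
    else:
        request.prefill_stats["num_external_computed_tokens"] = ext_tokens


new_req, old_req = NewStyleRequest(), LegacyRequest()
record_external_tokens(new_req, 64)
record_external_tokens(old_req, 64)
print(new_req.num_external_computed_tokens)  # 64
print(old_req.prefill_stats["num_external_computed_tokens"])  # 64
```

Feature detection avoids the fragility the reviewer flags, since it tracks what the installed vLLM actually provides rather than a hard-coded version string; the trade-off is that it cannot distinguish versions where the attribute exists but behaves differently.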

Comment thread on vllm_ascend/patch/platform/patch_balance_schedule.py
@wangxiyuan wangxiyuan merged commit 80d22f1 into vllm-project:main Apr 20, 2026
23 checks passed
Pz1116 pushed a commit to Pz1116/vllm-ascend that referenced this pull request Apr 20, 2026
…fferent vllm version (vllm-project#8426)

### What this PR does / why we need it?
This fixes vllm-project/vllm#37460.
This PR introduces version-specific logic to handle `num_cached_tokens`
and `num_external_computed_tokens` in the scheduler, ensuring
compatibility with vLLM 0.19.0 and maintaining legacy support for older
versions via `prefill_stats`.
### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.19.0
- vLLM main:
vllm-project/vllm@6f786f2

Signed-off-by: wangli <wangli858794774@gmail.com>
tfhddd pushed a commit to ascend-gha-runners/vllm-ascend that referenced this pull request Apr 21, 2026
anning-2026 pushed a commit to anning-2026/vllm-ascend that referenced this pull request Apr 21, 2026
guxin108 pushed a commit to guxin108/vllm-ascend that referenced this pull request Apr 24, 2026
zouyida2052 pushed a commit to zouyida2052/vllm-ascend that referenced this pull request Apr 28, 2026