[model_runner_v2] Optimize the performance of the post_update. #7496

Merged
weijinqian0 merged 6 commits into vllm-project:main from weijinqian0:triton_post_update
Mar 23, 2026

Conversation

Collaborator

@weijinqian0 weijinqian0 commented Mar 20, 2026

What this PR does / why we need it?

  • This PR improves operator performance in the post_update phase of model_runner_v2 on NPUs. Optimizing these operations raises the overall efficiency and speed of models running on NPU hardware, which matters for high-performance inference scenarios.
  • With bs = 256, the time cost drops from 26 µs to 11 µs.
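The PR text doesn't show the kernel itself, but the gain comes from fusing several small per-request bookkeeping updates into a single device pass. A minimal pure-Python sketch of that fusion idea follows; the state names (`seq_lens`, `input_tokens`, `positions`) are illustrative assumptions, not the actual vllm-ascend buffers, and a real implementation would do this in one Triton kernel launch rather than a Python loop.

```python
def post_update_reference(sampled, seq_lens, input_tokens, positions):
    """Unfused version: three separate passes over the batch,
    analogous to launching three small device ops."""
    for i in range(len(sampled)):      # pass 1: write the sampled token
        input_tokens[i] = sampled[i]
    for i in range(len(sampled)):      # pass 2: position of the next token
        positions[i] = seq_lens[i]
    for i in range(len(sampled)):      # pass 3: grow the sequence length
        seq_lens[i] += 1


def post_update_fused(sampled, seq_lens, input_tokens, positions):
    """Fused version: one pass updates all per-request state together,
    which is what a single kernel launch achieves on device."""
    for i in range(len(sampled)):
        input_tokens[i] = sampled[i]
        positions[i] = seq_lens[i]
        seq_lens[i] += 1
```

Both variants produce identical state; the fused form simply touches each batch index once, cutting launch overhead and memory traffic.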

Does this PR introduce any user-facing change?

No, there are no changes to the API, interface, or other high-level behaviors that would directly affect the user's code or interaction with the system beyond the performance improvement.

How was this patch tested?

CI passed with newly added and existing tests. In addition to the regular CI runs, benchmark tests were conducted on NPU hardware to measure the performance improvement of the post_update operators.

weijinqian_v1 added 2 commits March 20, 2026 09:57
…rnel.

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message to match the PR description, helping reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a critical performance improvement for the model_runner_v2 component, specifically targeting the post_update operation on NPU hardware. By refactoring this operation to leverage Triton kernels, the change aims to enhance the overall efficiency and speed of model inference, which is crucial for high-performance scenarios. The optimization is entirely internal and does not introduce any user-facing API changes, ensuring a seamless upgrade for users while providing substantial backend performance gains.

Highlights

  • Performance Optimization: Optimized the post_update phase within model_runner_v2 for NPU hardware, significantly reducing execution time from 26us to 11us for a batch size of 256.
  • Triton Kernel Implementation: Introduced a new Triton kernel, _post_update_kernel, to handle the post_update logic efficiently on NPUs, replacing previous implementations.
  • New Test Coverage: Added a dedicated end-to-end test (test_post_update.py) to validate the correctness and performance of the NPU-optimized post_update function against a GPU reference.
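The review below flags one issue in the new test: the assertion's debug message miscomputed the difference. The actual test_post_update.py is not reproduced here, but a hedged sketch of the comparison pattern it describes, with hypothetical helper names, shows how computing the difference once avoids that class of bug:

```python
def compute_max_diff(actual, expected):
    """Maximum absolute element-wise difference between two flat
    sequences of numbers (illustrative stand-in for a tensor compare)."""
    return max(abs(a - e) for a, e in zip(actual, expected))


def assert_allclose(actual, expected, atol=1e-5):
    """Compare the optimized path's output against a reference.
    Computing the diff once and reusing it in the message guarantees
    the reported number matches the value actually tested."""
    diff = compute_max_diff(actual, expected)
    assert diff <= atol, f"max abs diff {diff} exceeds atol {atol}"
```

In the real test this comparison would run between the NPU-optimized post_update output and the reference implementation's output on identical inputs.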


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a Triton kernel to optimize the post_update operation for model_runner_v2 on NPUs, which shows a significant performance improvement. A new end-to-end test is added to ensure the correctness of the new implementation by comparing it against the original PyTorch version. My review found one issue in the new test file where the assertion's debug message was incorrectly calculating the difference, which I've provided a suggestion to fix.

@weijinqian0 weijinqian0 added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 20, 2026
@weijinqian0 weijinqian0 merged commit bdd90c0 into vllm-project:main Mar 23, 2026
38 checks passed
@Ronald1995 Ronald1995 mentioned this pull request Mar 24, 2026
35 tasks
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
…roject#7496)

### What this PR does / why we need it?
- This PR aims to enhance the operator performance in the `post_update`
phase of `model_runner_v2` on NPUs. By optimizing the relevant
operations, it is expected to improve the overall efficiency and speed
of the model running on NPU hardware, which is crucial for scenarios
where high-performance inference is required.
- when bs = 256, time cost reduce from 26us to 11 us; 

### Does this PR introduce _any_ user-facing change?
No, there are no changes to the API, interface, or other high-level
behaviors that would directly affect the user's code or interaction with
the system beyond the performance improvement.

### How was this patch tested?
CI passed with new added/existing tests. In addition to the regular CI
tests, specific benchmark tests were conducted on NPU hardware to
measure the performance improvement of the `post_update` operators.

---------

Signed-off-by: weijinqian_v1 <weijinqian@huawei.com>
Co-authored-by: weijinqian_v1 <weijinqian@huawei.com>
winson-00178005 added a commit to winson-00178005/vllm-ascend that referenced this pull request Mar 26, 2026
- Remove is_skipped flag from tests/e2e/singlecard/model_runner_v2/test_basic.py
- Test was originally skipped due to get_cuda_view_from_cpu_tensor error (vllm-project#5752)
- Recent model_runner_v2 improvements may have resolved the issue:
  - vllm-project#7110: Added aclgraph support
  - vllm-project#7496: Optimized post_update performance
  - vllm-project#7221: Optimized _topk_log_softmax_kernel performance
- CI will verify if the test now passes successfully

Signed-off-by: hejianping <hejianping7@huawei.com>
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026

Labels

module:tests, ready (read for review), ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants