[Perf] Optimize bias handling in AscendRMSNorm #7226
MengqingCao merged 1 commit into vllm-project:main
Conversation
Activity
Code Review
This pull request introduces an optimization in AscendRMSNorm to prevent unnecessary bias additions by introducing a bias_loaded flag. The changes are a good step towards improving performance. However, the optimization is incomplete as it has not been applied to all code paths within the forward_oot method. Specifically, the branch handling a non-None residual still relies on the old check, which can result in redundant computations. I've added a comment with more details.
```python
super().__init__(hidden_size, eps, var_hidden_size, has_weight, dtype)
vllm_config = get_current_vllm_config()
self.bias = None
self.bias_loaded = False
```
While adding the bias_loaded flag is a good optimization, it's not used consistently throughout the forward_oot method. The branch that handles a non-None residual (lines 70-79) still checks if self.bias is not None and passes self.bias to the custom op unconditionally. This can lead to unnecessary bias additions with a zero tensor, which this PR aims to prevent. To make the optimization effective in all cases, this logic should also be updated to use self.bias_loaded.
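To make the control flow concrete, here is a minimal plain-PyTorch sketch of the guarded pattern this comment asks for. `RMSNormSketch` is a hypothetical stand-in: the real `AscendRMSNorm.forward_oot` dispatches to fused Ascend custom ops rather than this eager math, but the point is that both branches reach the single `bias_loaded` guard.

```python
from typing import Optional

import torch
from torch import nn


class RMSNormSketch(nn.Module):
    """Hypothetical stand-in for AscendRMSNorm; shows control flow only."""

    def __init__(self, hidden_size: int, eps: float = 1e-6) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        # Zero placeholder bias; only meaningful once a checkpoint loads it.
        self.bias = nn.Parameter(torch.zeros(hidden_size), requires_grad=False)
        self.bias_loaded = False
        self.variance_epsilon = eps

    def forward(self, x: torch.Tensor,
                residual: Optional[torch.Tensor] = None):
        if residual is not None:
            x = x + residual  # fused with the norm in the real Ascend kernel
            residual = x
        variance = x.pow(2).mean(dim=-1, keepdim=True)
        x = x * torch.rsqrt(variance + self.variance_epsilon) * self.weight
        # Both branches fall through to this single guard, so a never-loaded
        # (all-zero) bias costs nothing in either path.
        if self.bias_loaded:
            x = x + self.bias
        return x if residual is None else (x, residual)
```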
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
Signed-off-by: rjg-lyh <1318825571@qq.com>
```python
self.bias = torch.nn.Parameter(torch.zeros(hidden_size), requires_grad=False)
self.bias.weight_loader = self._bias_weight_loader
```

```python
def _bias_weight_loader(self, param: torch.nn.Parameter, loaded_weight: torch.Tensor) -> None:
```
It would be better to wrap the original weight loader so that we don't need to reimplement the details.
I think the custom op has no weight loader function, so I don't get your point. If you still have any questions on this, please feel free to open a new PR. This PR is ready to merge now.
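For completeness, a hedged sketch of what the loader body could look like — the method name and the `bias_loaded` flag come from the diff above, but the body here is an assumption. The reviewer's wrapper idea would instead delegate the copy to vLLM's `default_weight_loader` and keep only the flag flip.

```python
import torch


def _bias_weight_loader(self, param: torch.nn.Parameter,
                        loaded_weight: torch.Tensor) -> None:
    # Copy the checkpoint bias into the zero placeholder and record that a
    # real bias now exists, so the forward pass stops skipping the add.
    param.data.copy_(loaded_weight.to(param.dtype))
    self.bias_loaded = True
```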
What this PR does / why we need it?

This PR optimizes bias handling in `AscendRMSNorm` without changing the intended functional behavior.

In the current implementation, bias may be initialized for `AscendRMSNorm` based on configuration-level detection, even though some norm layers never actually load a bias weight. This can cause the inference path to enter the bias branch and execute an unnecessary `add_` operator.

To improve this, this PR introduces a loader-based flag to record whether the bias has actually been loaded. The bias addition is then executed only when the bias is truly present.

This optimization reduces redundant computation in inference and makes the bias application logic better aligned with the actual model weights.

- vLLM version: v0.17.0
- vLLM main: vllm-project/vllm@4034c3d
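As a usage illustration of the described pattern (hypothetical, built on the `RMSNormSketch` stand-in from the review thread above rather than the real class):

```python
import torch

norm = RMSNormSketch(hidden_size=64)
x = torch.randn(2, 64)

out_default = norm(x)             # bias_loaded is False: no add executed

# Checkpoint loading later provides a real bias and flips the flag.
norm.bias.data.copy_(torch.randn(64))
norm.bias_loaded = True

out_biased = norm(x)              # the addition now actually runs
assert not torch.equal(out_default, out_biased)
```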
Does this PR introduce any user-facing change?
No.
How was this patch tested?
CI passed with newly added and existing tests.
Before:

rmsnorm + add
After:

just rmsnorm
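For anyone wanting to reproduce this kind of before/after comparison off-device, here is a rough CPU stand-in using `torch.profiler`. The screenshots above come from an Ascend trace, which this does not replicate; this only counts eager `add`-family ops on the hypothetical `RMSNormSketch` class from earlier in the thread.

```python
import torch
from torch.profiler import ProfilerActivity, profile

norm = RMSNormSketch(hidden_size=1024)
x = torch.randn(32, 1024)

for loaded in (False, True):
    norm.bias_loaded = loaded
    with profile(activities=[ProfilerActivity.CPU]) as prof:
        norm(x)
    # One extra aten::add shows up when the bias branch is taken.
    n_add = sum(e.count for e in prof.key_averages() if "add" in e.key)
    print(f"bias_loaded={loaded}: add-family ops recorded = {n_add}")
```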