
[bugfix][accuracy] Fix ds indexer accuracy problem caused by k rope#7341

Merged
MengqingCao merged 1 commit into vllm-project:main from rjg-lyh:pr-fix-rope-accuracy
Mar 18, 2026

Conversation

Collaborator

@rjg-lyh rjg-lyh commented Mar 16, 2026

What this PR does / why we need it?

The rotary algorithm in the DeepSeek indexer should be neox-style, not gptj-style. PR #4641 fixed this accuracy bug in the original PyTorch version, but PR #5701 accidentally removed the fixed code line and reverted the implementation to the problematic version. This PR restores the fix.
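For context, the two RoPE layouts pair up head dimensions differently: neox-style rotates the two contiguous halves ("rotate-half"), while gptj-style rotates interleaved even/odd pairs. The sketch below is a plain-numpy illustration of that difference, not the actual NPU kernels; the function names `rope_neox` and `rope_gptj` are illustrative, and the frequency schedule follows the standard RoPE definition.

```python
import numpy as np

def rope_neox(x, cos, sin):
    # NeoX / rotate-half layout: pair element i with element i + dim/2.
    half = x.shape[-1] // 2
    rotated = np.concatenate([-x[half:], x[:half]])
    return x * np.concatenate([cos, cos]) + rotated * np.concatenate([sin, sin])

def rope_gptj(x, cos, sin):
    # GPT-J / interleaved layout: pair adjacent elements 2i and 2i + 1.
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

dim = 8
freqs = 10000.0 ** (-np.arange(0, dim, 2) / dim)  # one frequency per 2-D pair
pos = 3
cos, sin = np.cos(pos * freqs), np.sin(pos * freqs)
x = np.arange(1.0, dim + 1)

neox_out, gptj_out = rope_neox(x, cos, sin), rope_gptj(x, cos, sin)
# Both are rotations (norm-preserving), but they pair dimensions differently,
# so the same input produces different embeddings under the two layouts.
assert np.isclose(np.linalg.norm(neox_out), np.linalg.norm(x))
assert not np.allclose(neox_out, gptj_out)
```

This is why applying the wrong style to the key tensor silently degrades accuracy rather than crashing: the output is still a valid rotation, just not the one the rest of the model expects.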

Does this PR introduce any user-facing change?

No.

How was this patch tested?

CI passed with newly added and existing tests.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical accuracy bug within the deepseek indexer's rotary positional embedding calculation. It rectifies an accidental regression where a previously implemented fix for the neox-style rotary algorithm was reverted, ensuring the correct mathematical operation is applied to maintain model accuracy.

Highlights

  • Deepseek Indexer Bugfix: Addressed a regression in the deepseek indexer's rotary algorithm, restoring the correct neox-style implementation that was previously fixed but later reverted.
  • Rotary Embedding Correction: Modified the vllm_ascend/attention/sfa_v1.py file to use torch_npu.npu_rotary_mul instead of torch_npu.npu_interleave_rope for k_li_pe tensor processing.


Changelog
  • vllm_ascend/attention/sfa_v1.py
    • Replaced torch_npu.npu_interleave_rope with torch_npu.npu_rotary_mul for k_li_pe processing.


@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses an accuracy issue in the DeepSeek indexer's rotary position embedding (RoPE) implementation. The change reverts an accidental regression, replacing npu_interleave_rope (GPT-J style) with the correct npu_rotary_mul (NeoX style) for the key tensor. This ensures consistency with the query tensor's RoPE application and restores the intended behavior. The fix appears correct and well-justified.
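The consistency point above is the crux: RoPE's key property is that the q·k attention score depends only on the relative position m − n, and that property holds only when query and key use the same rotation layout. A plain-numpy sketch (illustrative helpers, not the torch_npu ops) showing the invariance with matched styles, and its loss when the styles are mixed as in the regression:

```python
import numpy as np

def rope_neox(x, pos, freqs):
    # NeoX / rotate-half layout: pair element i with element i + dim/2.
    half = x.shape[-1] // 2
    cos = np.concatenate([np.cos(pos * freqs)] * 2)
    sin = np.concatenate([np.sin(pos * freqs)] * 2)
    rotated = np.concatenate([-x[half:], x[:half]])
    return x * cos + rotated * sin

def rope_gptj(x, pos, freqs):
    # GPT-J / interleaved layout: pair adjacent elements 2i and 2i + 1.
    cos, sin = np.cos(pos * freqs), np.sin(pos * freqs)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

dim = 8
freqs = 10000.0 ** (-np.arange(0, dim, 2) / dim)
rng = np.random.default_rng(0)
q, k = rng.standard_normal(dim), rng.standard_normal(dim)

# Matched styles: (q at pos 5, k at pos 2) scores the same as (q at 7, k at 4),
# since both pairs have relative offset 3.
matched = [rope_neox(q, m, freqs) @ rope_neox(k, n, freqs) for m, n in [(5, 2), (7, 4)]]
assert np.isclose(matched[0], matched[1])

# Mixed styles (neox on q, gptj on k): the relative-position invariance is lost.
mixed = [rope_neox(q, m, freqs) @ rope_gptj(k, n, freqs) for m, n in [(5, 2), (7, 4)]]
assert not np.isclose(mixed[0], mixed[1])
```

Restoring the neox-style rotation on the key path keeps it in the same layout as the query path, which is what the fix in this PR does.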

As per the repository's style guide, here are the suggested PR title and summary:

Suggested PR Title:

[Attention][BugFix] Fix ds indexer accuracy problem caused by k rope

Suggested PR Summary:

### What this PR does / why we need it?

The rotary algorithm in the DeepSeek indexer should be neox-style instead of gptj-style. A previous change (PR #4641) corrected this, but the fix was accidentally reverted in PR #5701. This pull request restores the correct neox-style implementation (`npu_rotary_mul`) to fix the accuracy bug.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI passed with newly added and existing tests.

@rjg-lyh rjg-lyh added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 16, 2026
@MengqingCao MengqingCao added and removed the ready-for-test (start test by label for PR) label Mar 17, 2026
@MengqingCao MengqingCao merged commit c1392a6 into vllm-project:main Mar 18, 2026
92 of 94 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 18, 2026
…scend into qwen3next_graph

* 'qwen3next_graph' of https://github.com/845473182/vllm-ascend: (62 commits)
  [doc] Refresh the documentation for DeepSeek-V3.2 (vllm-project#7403)
  [bugfix][accuracy] Fix ds indexer accuracy problem caused by k rope (vllm-project#7341)
  [P/D] LayerwiseConnector supports the virtual push functionality on node D. (vllm-project#7361)
  [CI] Add PAT_TOKEN when checkout (vllm-project#7400)
  [main2main] upgrade vllm to 0308 (vllm-project#7213)
  [CI] add scheduled stale issue management (vllm-project#7354)
  [CI] expand issue labeler rules for feature/model triage (vllm-project#7356)
  [Bugfix] Assertion error when decode prefix cache fully hits (vllm-project#7236)
  [doc] Refresh the documentation for GLM-4.7 (vllm-project#7292)
  [BugFix]A2 MOE method&& layerwise MTP bugfix && Mamba gdn_metadata bugfix (vllm-project#7364)
  [doc] Upload doc for qwen3.5-27B and qwen3.5-397B-A17B on Ascend (vllm-project#7313)
  [bugfix]Enable dispatch_ffn_combine feature for qwen3.5 (vllm-project#7066)
  [bugfix] fix unzip file path for fia operator (vllm-project#7367)
  [Perf] Optimize bias handling in AscendRMSNorm (vllm-project#7226)
  [eagle3][pcp] fix bug for eagle3 and cp enable (vllm-project#7309)
  [Bugfix] fix TransposeKvCacheByBlock op error report in plog (vllm-project#7235)
  [Feature]Supports DSv3.1 PD separation and C8 quantization (vllm-project#7222)
  [main][bugfix] Fixed the problem that eagle3 will crash in FULL_DECODE_ONLY (vllm-project#7290)
  [xlite][Bugfix] Support mrope and deepstack features in xlite backend (vllm-project#7295)
  [model_runner_v2]optimize the performance of the _topk_log_softmax_kernel (vllm-project#7221)
  ...
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
…llm-project#7341)

### What this PR does / why we need it?
The rotary algorithm in deepseek indexer should be neox-style instead of
gptj style. PR vllm-project#4641 fix this accuracy bug in original pytorch version.
But PR vllm-project#5701 accidentally removed the fixed code line and reverted the
implementation back to the problematic version. This PR fixes it.

Signed-off-by: rjg-lyh <1318825571@qq.com>
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
…llm-project#7341)

chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026
…llm-project#7341)


Labels

ready (read for review), ready-for-test (start test by label for PR)
