[Feature] enable index Cache for npu #8324

Closed

ChefWu551 wants to merge 1 commit into vllm-project:main from ChefWu551:index-cache-npu

Conversation

@ChefWu551 (Contributor) commented Apr 16, 2026

Motivation

This PR adds the Ascend NPU adaptation for IndexCache in vLLM-Ascend, based on the upstream IndexCache work (vLLM PR #37735, issue #37684).

Modifications

This PR adds the NPU-oriented integration and adaptation of IndexCache in vLLM-Ascend: a shared_topk_indices field in AscendSFAMetadata and a skip_topk flag that let attention layers reuse previously computed top-k indices.

Accuracy Tests

TODO: Accuracy evaluation data will be added in a follow-up update.

Benchmarking and Profiling

TODO: Benchmark and profiling data (IndexCache on/off) will be added in a follow-up update.

Checklist

@github-actions (Contributor) commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates IndexCache functionality into the vLLM-Ascend stack. By allowing attention layers to share and reuse top-k indices, the changes reduce redundant computations during model inference, aligning the NPU implementation with upstream vLLM improvements.

Highlights

  • IndexCache Integration: Enabled IndexCache support for Ascend NPU by implementing shared top-k index reuse across attention layers.
  • Performance Optimization: Introduced a skip_topk mechanism to avoid redundant index calculations during the forward pass, improving efficiency.


@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces a mechanism to share topk_indices across layers in AscendSFAMetadata using a skip_topk flag to optimize performance. A critical issue was identified where the shared indices are incorrectly reused despite being dependent on layer-specific hidden_states.

Suggested PR Title:

[Attention][Ops][Feature] Support shared top-k indices in SFA

Suggested PR Summary:

### What this PR does / why we need it?
This PR adds a `shared_topk_indices` field to `AscendSFAMetadata` and a `skip_topk` flag to the attention and MLA modules. This allows layers to reuse previously computed top-k indices to reduce redundant computations.

Feedback: A critical flaw was identified where `topk_indices` are computed from layer-specific `hidden_states`, making the reuse of these indices across layers mathematically incorrect.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed.
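
For context, the following is a minimal, self-contained sketch of the caching pattern the summary describes; ToyMetadata, ToyIndexer, and the tensor shapes are illustrative stand-ins, not the actual vllm-ascend classes.

# Minimal sketch of the caching pattern described above; illustrative only.
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class ToyMetadata:
    # Populated by the first "full" layer, reused by later "shared" layers.
    shared_topk_indices: Optional[torch.Tensor] = None

class ToyIndexer(torch.nn.Module):
    def __init__(self, hidden: int, topk: int, skip_topk: bool):
        super().__init__()
        self.weights_proj = torch.nn.Linear(hidden, hidden)
        self.topk = topk
        self.skip_topk = skip_topk  # True for layers that only reuse

    def select(self, hidden_states: torch.Tensor, meta: ToyMetadata) -> torch.Tensor:
        if self.skip_topk and meta.shared_topk_indices is not None:
            # Shared layer: skip the top-k computation entirely.
            return meta.shared_topk_indices
        # Full layer: score tokens from this layer's hidden states,
        # take the top-k, and publish the indices for downstream layers.
        scores = self.weights_proj(hidden_states)
        indices = scores.topk(self.topk, dim=-1).indices
        meta.shared_topk_indices = indices
        return indices

meta = ToyMetadata()
x = torch.randn(4, 64)
full_layer = ToyIndexer(64, topk=8, skip_topk=False)
shared_layer = ToyIndexer(64, topk=8, skip_topk=True)
assert torch.equal(full_layer.select(x, meta), shared_layer.select(x, meta))

In the real code, the metadata object is AscendSFAMetadata and the top-k computation happens in indexer_select_post_process, as shown in the review excerpt below.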

Comment on lines +1235 to +1248
if self.skip_topk and attn_metadata.shared_topk_indices is not None:
    topk_indices = attn_metadata.shared_topk_indices
else:
    topk_indices = self.indexer_select_post_process(
        x=hidden_states,
        q_c=q_c,
        kv_cache=kv_cache,
        attn_metadata=attn_metadata,
        cos=cos,
        sin=sin,
        actual_seq_lengths_query=actual_seq_lengths_query,
        actual_seq_lengths_key=actual_seq_lengths_key,
    )
attn_metadata.shared_topk_indices = topk_indices
@gemini-code-assist (Bot) commented
critical

The current implementation of IndexCache appears to have a critical flaw. The topk_indices are cached and reused across layers, but their computation in indexer_select_post_process depends on hidden_states, which is unique to each layer.

Specifically, indexer_select_post_process uses x (which is hidden_states) to compute weights:

weights, _ = self.weights_proj(x)

These weights are then used to determine topk_indices. Since hidden_states differ from one layer to the next, the topk_indices will also be different. Reusing them will lead to incorrect attention calculations.

For IndexCache to work correctly, the computation of topk_indices must be based on tensors that are shared across the layers intended to use the cache. This might require passing a shared tensor to indexer_select_post_process instead of the per-layer hidden_states.
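
To make the concern concrete, here is a small hypothetical demonstration (toy shapes, randomly initialized projections standing in for two different layers; not vllm-ascend code) showing that indices derived from layer-specific activations generally select different token sets, so reusing layer 0's indices in layer 1 changes which tokens layer 1 attends to.

# Toy demonstration of the reviewer's point; not vllm-ascend code.
import torch

torch.manual_seed(0)
num_tokens, hidden, topk = 16, 64, 4

# Stand-ins for two different layers: each has its own projection and
# sees its own hidden states.
proj_l0 = torch.nn.Linear(hidden, num_tokens)
proj_l1 = torch.nn.Linear(hidden, num_tokens)
hidden_l0 = torch.randn(num_tokens, hidden)
hidden_l1 = torch.randn(num_tokens, hidden)  # differs from layer 0

idx_l0 = proj_l0(hidden_l0).topk(topk, dim=-1).indices
idx_l1 = proj_l1(hidden_l1).topk(topk, dim=-1).indices

# Per-row overlap between the two layers' selected index sets; reusing
# layer 0's indices in layer 1 keeps only this fraction of layer 1's
# own choices, on average.
overlap = torch.tensor([
    len(set(a.tolist()) & set(b.tolist())) / topk
    for a, b in zip(idx_l0, idx_l1)
]).mean().item()
print(f"average top-k set overlap between layers: {overlap:.2f}")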

@ChefWu551 (Contributor, Author) replied

Thanks for the review. This behavior is intentional and matches upstream IndexCache semantics (vLLM PR #37735, issue #37684).
IndexCache is an approximate optimization: “full” layers compute top-k indices, while “shared” layers reuse cached indices to reduce redundant computation.
In our implementation, reuse is only enabled when skip_topk=True; otherwise indices are computed per layer as usual.
We’ll also attach accuracy/performance results to quantify the tradeoff.
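
As a rough illustration of those semantics, a simple assignment policy might designate periodic "full" layers that recompute indices while the remaining "shared" layers set skip_topk=True; this helper is hypothetical, and the actual layer selection used upstream and in this PR may differ.

# Hypothetical layer-assignment sketch; the real policy may differ.
def build_skip_topk_flags(num_layers: int, full_every: int = 4) -> list[bool]:
    """Return skip_topk per layer: False for 'full' layers that recompute
    top-k indices, True for 'shared' layers that reuse the cached ones."""
    return [layer % full_every != 0 for layer in range(num_layers)]

flags = build_skip_topk_flags(num_layers=12, full_every=4)
# Layers 0, 4, 8 recompute; the rest reuse the most recent cached indices.
print(["full" if not skip else "shared" for skip in flags])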

@ChefWu551 changed the title from "[NPU] enable index Cache for npu" to "[Feature] enable index Cache for npu" on Apr 16, 2026
@ChefWu551 (Contributor, Author) commented

This PR has been closed due to implementation issues. PR #8398 has fixed the corresponding functionality and provided relevant benchmark data, showing an improvement of 16%–18%.

@ChefWu551 closed this on Apr 17, 2026
@ChefWu551 deleted the index-cache-npu branch on April 17, 2026 at 12:51