
[Refactor] Reconstructing TP asymmetry and C8 quantization buffer allocation #7658

Open

pichangping wants to merge 17 commits into vllm-project:main from pichangping:main

Conversation

@pichangping (Contributor) commented Mar 26, 2026

What this PR does / why we need it?

This addresses the legacy issue from PR #7222: unifying and optimizing the creation of buffers for TP asymmetry and C8 quantization in PD-separation scenarios.

Does this PR introduce any user-facing change?

no

How was this patch tested?

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a legacy issue by refactoring and unifying the buffer allocation strategy for KV caches, specifically for scenarios involving Tensor Parallelism (TP) asymmetry and C8 quantization. By consolidating dedicated quantization buffers into the main KV buffers, the change simplifies memory management and streamlines the attention processing logic, leading to a more robust and efficient system.

Highlights

  • KV Cache Buffer Unification: The dedicated k_quant_buffer and v_quant_buffer have been removed, and their functionality is now integrated into the main k_buffer and v_buffer. This unifies buffer allocation for both Tensor Parallelism (TP) asymmetry and C8 quantization, as sketched below.
  • Attention Logic Streamlining: The attention mechanism's _forward_decode method now uses a unified logic for input layout and q_nope processing when fa_quant_layer or enable_kv_nz is active, removing a redundant elif block.
  • Dynamic Dequantization Scale Reshaping: Added logic to dynamically reshape dequant_scale_q_nope based on whether speculative decoding (MTP) is enabled or disabled, ensuring correct dimensions for the FIA (Fused Attention) operation.
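
To make the unification concrete, here is a minimal sketch of what the consolidated allocation might look like. The helper align_memory and names such as first_k_cache and first_v_cache follow the code snippets quoted later in this thread; the shapes, dtypes, and exact view logic are illustrative assumptions, not the PR's actual code.

```python
import torch

def align_memory(t: torch.Tensor, alignment: int) -> torch.Tensor:
    # Illustrative stand-in: skip leading elements until the storage
    # pointer is `alignment`-byte aligned.
    off_bytes = (alignment - t.data_ptr() % alignment) % alignment
    return t[off_bytes // t.element_size():]

# Assumed mixed-precision C8 setup: K quantized to int8, V kept in bf16.
first_k_cache = torch.empty(1024, 128, dtype=torch.int8)
first_v_cache = torch.empty(1024, 128, dtype=torch.bfloat16)
alignment = 256

# One buffer per cache, sized so it can also serve quantized transfers,
# instead of separate k_quant_buffer / v_quant_buffer tensors.
k_numel, v_numel = first_k_cache.numel(), first_v_cache.numel()
k_buffer = torch.empty(k_numel + alignment, dtype=first_k_cache.dtype,
                       device=first_k_cache.device)
k_buffer = align_memory(k_buffer, alignment)[:k_numel].view(
    -1, first_k_cache.shape[-1])
v_buffer = torch.empty(v_numel + alignment, dtype=first_v_cache.dtype,
                       device=first_v_cache.device)
v_buffer = align_memory(v_buffer, alignment)[:v_numel].view(
    -1, first_v_cache.shape[-1])
```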


@gemini-code-assist bot left a comment

Code Review

This pull request refactors KV cache buffer management by consolidating separate quantization buffers into the main k_buffer and v_buffer. It updates the _forward_decode and _mla_preprocess_only_decode functions to correctly handle fa_quant_layer and dequant_scale_q_nope reshaping, integrating quantization logic more directly into the main KV buffer handling. A critical issue was identified where the v_buffer might be initialized with an incorrect data type, potentially causing mismatches in mixed-precision quantization scenarios.

```diff
-        self.k_quant_buffer = align_memory(self.k_quant_buffer, alignment)[:quant_k_cache_numel].view(
-            -1, first_kv_cache.shape[-1]
-        )
+        first_v_cache_numel + alignment, dtype=first_k_cache.dtype, device=first_k_cache.device
```

critical

The v_buffer is being created with first_k_cache.dtype. In scenarios with mixed-precision KV cache quantization (like C8 where K is int8 and V is float16/bfloat16), this could lead to a dtype mismatch. It seems first_v_cache.dtype should be used here to ensure the buffer for V cache has the correct data type, especially when self.enable_kv_quant is true.

Suggested change:

```diff
-        first_v_cache_numel + alignment, dtype=first_k_cache.dtype, device=first_k_cache.device
+        first_v_cache_numel + alignment, dtype=first_v_cache.dtype, device=first_v_cache.device
```

@pichangping (Author) replied:

The KV cache resides on a single device, so first_k_cache.device and first_v_cache.device always refer to the same device; using first_k_cache.device here is correct.
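
To illustrate the reviewer's dtype concern, here is a toy example; the int8/bf16 pairing is an assumption chosen to mirror the mixed-precision C8 case described above, not the PR's actual configuration.

```python
import torch

# Assumed mixed-precision C8 setup: K quantized to int8, V kept in bf16.
first_k_cache = torch.empty(16, dtype=torch.int8)
first_v_cache = torch.full((16,), 0.5, dtype=torch.bfloat16)

# Allocating the V buffer with K's dtype makes copy_() silently cast the
# bf16 values down to int8, corrupting the V cache contents.
bad_v_buffer = torch.empty(16, dtype=first_k_cache.dtype)
bad_v_buffer.copy_(first_v_cache)   # 0.5 truncated to 0

# Allocating with V's own dtype preserves the data exactly.
good_v_buffer = torch.empty(16, dtype=first_v_cache.dtype)
good_v_buffer.copy_(first_v_cache)  # stays 0.5
```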

Signed-off-by: pichangping <1337510399@qq.com>
Signed-off-by: pichangping <1337510399@qq.com>
@pichangping changed the title from "[refector]Reconstructing TP asymmetry and C8 quantization buffer allocation" to "[refactor]Reconstructing TP asymmetry and C8 quantization buffer allocation" Mar 26, 2026
pichangping and others added 3 commits March 26, 2026 11:40
Signed-off-by: pichangping <1337510399@qq.com>
Signed-off-by: pichangping <1337510399@qq.com>
@wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 26, 2026
"""When MTP is enabled or disabled, the different input_layout results in different
dimensions of dequant_scale_q_nope required by FIA.
"""
if self.speculative_config is None:
A contributor commented on this snippet:

The attn_metadata.attn_state status is not checked. However, since C8 quantization is performed only at the D node, the condition on lines 1300–1305 is already met, so this is not a problem.

@pichangping (Author) replied:

yes
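
For readers following along, here is a minimal runnable sketch of the branch discussed above. Only the speculative_config check comes from the quoted snippet; the target shapes for the scale are illustrative assumptions.

```python
import torch

num_heads = 8
speculative_config = None  # set to a non-None value to simulate MTP enabled
dequant_scale_q_nope = torch.ones(num_heads)

if speculative_config is None:
    # MTP disabled: assume FIA wants a flat per-head scale, e.g. (num_heads,)
    dequant_scale_q_nope = dequant_scale_q_nope.reshape(num_heads)
else:
    # MTP enabled: assume the changed input_layout needs a broadcastable
    # (1, num_heads, 1) scale instead.
    dequant_scale_q_nope = dequant_scale_q_nope.reshape(1, num_heads, 1)
```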

@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.

…o main

# Conflicts:
#	vllm_ascend/distributed/kv_transfer/kv_p2p/mooncake_layerwise_connector.py
@pichangping pichangping reopened this Apr 3, 2026
@pichangping changed the title from "[refactor]Reconstructing TP asymmetry and C8 quantization buffer allocation" to "[Refactor]Reconstructing TP asymmetry and C8 quantization buffer allocation" Apr 3, 2026
@github-actions

This pull request has conflicts, please resolve those before we can evaluate the pull request.


Labels

merge-conflicts, ready (read for review), ready-for-test (start test by label for PR)


4 participants