
[Feat](sfa,dcp) support dcp for sfa #6563

Merged

wangxiyuan merged 5 commits into vllm-project:main from pisceskkk:sfa/dcp on Feb 9, 2026

Conversation

@pisceskkk
Contributor

pisceskkk commented Feb 5, 2026

What this PR does / why we need it?

This PR adds DCP support to the SFA backend.

Please note that due to operator constraints, the current implementation has to all-gather the entire KV cache and modify the block table to satisfy the operator input requirements. This results in significantly increased communication overhead and peak memory usage. Therefore, this is only a temporary workaround and will be refactored once the operator provides proper support.

Additionally, because of the above limitations, cp_kv_cache_interleave_size is currently required to be equal to block_size. This restriction will also be removed after the refactor.
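As the review summary below notes, the config layer warns and overrides mismatched values rather than failing. For illustration, a minimal sketch of that behavior, with a hypothetical helper name and message (the real check lives in vllm_ascend/ascend_config.py):

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical helper illustrating the constraint above: under SFA with
# PCP/DCP, cp_kv_cache_interleave_size must equal block_size, so any other
# value is overridden with a warning.
def resolve_cp_interleave_size(cp_kv_cache_interleave_size: int,
                               block_size: int) -> int:
    if cp_kv_cache_interleave_size != block_size:
        logger.warning(
            "SFA with PCP/DCP requires cp_kv_cache_interleave_size == "
            "block_size; overriding %s with %s.",
            cp_kv_cache_interleave_size, block_size)
        return block_size
    return cp_kv_cache_interleave_size
```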

Test

Accuracy test using DeepSeek-V3.2-Exp-W8A8 with dp2tp8dcp8:

| dataset | version | metric | mode | vllm-api-general-stream |
| ----- | ----- | ----- | ----- | ----- |
| gsm8kdataset | - | accuracy | gen | 96.35 |

@github-actions
Contributor

github-actions Bot commented Feb 5, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist
Contributor

Summary of Changes

Hello @pisceskkk, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for Decode Context Parallelism (DCP) to the Sparse Flash Attention (SFA) backend. The changes involve adapting the KV cache and block table management to function correctly across parallel contexts, which currently necessitates an all-gather operation for the entire KV cache. This temporary approach, while functional, increases communication overhead and memory usage, and imposes a temporary restriction on cp_kv_cache_interleave_size. Future refactoring is planned to optimize these aspects.

Highlights

  • DCP Support for SFA: Enabled Decode Context Parallelism (DCP) within the Sparse Flash Attention (SFA) backend, allowing it to operate in distributed environments.
  • Temporary KV Cache Handling: Implemented a temporary workaround that involves all-gathering the entire KV cache and modifying the block table. This is necessary to meet current operator input requirements but may lead to increased communication overhead and peak memory usage, with plans for future optimization.
  • cp_kv_cache_interleave_size Restriction: Introduced a temporary constraint for SFA's PCP&DCP implementation, requiring cp_kv_cache_interleave_size to be equal to block_size. The system now automatically overrides this value and issues a warning if the condition is not met.
  • Parallel Group Integration: Integrated Prefill Context Parallelism (PCP) and Decode Context Parallelism (DCP) group information (size, rank, and group objects) directly into the SfaV1Attention class for better distributed control.
  • Cross-Parallel KV Cache Gathering: Added a new utility method, gather_kv_cross_cp, to perform all_gather operations on the KV cache and block tables across the distributed parallel contexts (a minimal sketch follows this list).
  • Attention Mechanism Updates: Modified the npu_sparse_flash_attention and npu_lightning_indexer calls to utilize the globally gathered KV cache and block tables when parallel contexts are active, ensuring correct data access in distributed settings.
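To make the gather_kv_cross_cp idea concrete, here is a minimal sketch using plain torch.distributed. The real method uses vLLM's PCP/DCP group objects, and the tensor layout, function name, and block-id remapping below are assumptions for illustration, not the actual vllm_ascend code:

```python
import torch
import torch.distributed as dist

def gather_kv_cross_cp_sketch(
    kv_cache: torch.Tensor,      # assumed [num_blocks, block_size, ...], this rank's shard
    block_table: torch.Tensor,   # assumed [num_seqs, max_blocks], local block ids
    group: dist.ProcessGroup,
) -> tuple[torch.Tensor, torch.Tensor]:
    world_size = dist.get_world_size(group)
    rank = dist.get_rank(group)

    # All-gather every rank's KV shard and concatenate along the block dim,
    # materializing the full KV cache on each rank (the temporary workaround
    # described in the PR).
    shards = [torch.empty_like(kv_cache) for _ in range(world_size)]
    dist.all_gather(shards, kv_cache, group=group)
    global_cache = torch.cat(shards, dim=0)

    # With cp_kv_cache_interleave_size == block_size, each block lives wholly
    # on one rank, so rank r's local block b maps to global block
    # r * num_blocks + b in the concatenated cache. The real remap must also
    # merge entries from all ranks' block tables; only the local shift is
    # shown here.
    num_blocks = kv_cache.shape[0]
    global_table = block_table + rank * num_blocks
    return global_cache, global_table
```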


Changelog
  • vllm_ascend/ascend_config.py
    • Refactored the use_sparse variable definition to be outside the enable_kv_nz conditional block.
    • Added logic to enforce cp_kv_cache_interleave_size == block_size for SFA's PCP&DCP implementation, issuing a warning and overriding the value if necessary.
  • vllm_ascend/attention/sfa_v1.py
    • Imported get_dcp_group and get_pcp_group for distributed context management.
    • Initialized pcp_size, pcp_rank, pcp_group, dcp_size, dcp_rank, and dcp_group attributes within the SfaV1Attention class.
    • Implemented a new gather_kv_cross_cp method for collective communication of KV cache and block tables across parallel groups.
    • Modified the forward method to use gather_kv_cross_cp for kv_cache and block_tables before invoking npu_sparse_flash_attention when parallel contexts are active (see the control-flow sketch after this changelog).
    • Modified the indexer_select_post_process method to use gather_kv_cross_cp for key and block_table before invoking npu_lightning_indexer when parallel contexts are active.
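For orientation, the gather-then-attend control flow described in the last two items might look roughly like this. This is a sketch only; the torch_npu operator calls are elided because their signatures are not shown in this PR summary:

```python
def forward_sketch(self, query, kv_cache, block_tables):
    # Temporary workaround: when PCP or DCP is active, first materialize the
    # full KV cache and a matching block table on every rank.
    if self.pcp_size > 1 or self.dcp_size > 1:
        kv_cache, block_tables = self.gather_kv_cross_cp(kv_cache, block_tables)
    # ...then invoke npu_sparse_flash_attention (and, on the indexer path,
    # npu_lightning_indexer) with the possibly gathered kv_cache and
    # block_tables.
```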

@github-actions
Contributor

github-actions Bot commented Feb 5, 2026

This pull request has conflicts; please resolve them before we can evaluate it.

Contributor

@gemini-code-assist Bot left a comment

Code Review

This pull request correctly implements support for context parallelism (PCP and DCP) in the SFA attention backend. The implementation is a temporary workaround that involves all-gathering the KV cache, which is clearly noted in the pull request description. The code changes are logical and consistently applied. A constraint on cp_kv_cache_interleave_size is correctly handled by overriding the value and logging a warning. I have not found any high or critical severity issues in the code itself.

Per the repository style guide, I suggest updating the pull request title and summary for clarity and completeness:

Suggested PR Title:

[Attention][Feature] Support Context Parallelism for SFA Backend

Suggested PR Summary:

### What this PR does / why we need it?

This PR adds support for context parallelism (both Prefill Context Parallelism - PCP, and Decode Context Parallelism - DCP) to the Sparse Flash Attention (SFA) backend on Ascend hardware. This enables scaling attention computation across multiple devices for long sequences.

Please note that due to operator constraints, the current implementation has to all-gather the entire KV cache and modify the block table to satisfy the operator input requirements. This results in significantly increased communication overhead and peak memory usage. Therefore, this is only a temporary workaround and will be refactored once the operator provides proper support.

Additionally, because of the above limitations, `cp_kv_cache_interleave_size` is currently required to be equal to `block_size`. This restriction will also be removed after the refactor.

### Does this PR introduce _any_ user-facing change?

Yes. This PR enables a new feature (context parallelism for SFA) for users. It also introduces a temporary constraint where `cp_kv_cache_interleave_size` is forced to be equal to `block_size` when using SFA with context parallelism, with a warning logged to the user.

### How was this patch tested?

CI passed with newly added and existing tests.

Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
@pisceskkk force-pushed the sfa/dcp branch 2 times, most recently from 0cbfee7 to ba3b821 on February 5, 2026 10:50
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
@pisceskkk force-pushed the sfa/dcp branch 3 times, most recently from b1c48df to 0d0bd27 on February 7, 2026 06:21
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
@weiguihua2 added the ready and ready-for-test labels on Feb 9, 2026
@wangxiyuan merged commit cb7c419 into vllm-project:main on Feb 9, 2026
59 of 60 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 11, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Feat] 310p support MoE W8A8 quantizaition (vllm-project#6641)
  [TEST]add a qwen3-30b acc case with mooncake mempool (vllm-project#6244)
  [MOE Refactor] Remove QuantType in prepare_finalize.py (vllm-project#6534)
  [EPLB] Avoiding eplb's dependency on a specified model (vllm-project#6528)
  [Doc][Misc] Restructure tutorial documentation (vllm-project#6501)
  implement batch invariant with ascendc (vllm-project#6590)
  [Refact]Refact MLA/SFA weight prefetch to consist with moe weight prefetch (vllm-project#6629)
  [Misc] upgrade to vllm main (vllm-project#6646)
  [main][Docs] Fix spelling errors across documentation (vllm-project#6649)
  [bugfix]Fix no attribute 'data' when MLAPO is enable  (vllm-project#6601)
  [DOC]Add Memcache Usage Guide (vllm-project#6476)
  [main][bugfix] Fix spec acceptance rate problem in vllm_0.15.0 (vllm-project#6606)
  [Test][LoRA] Add e2e test for base model inference (vllm-project#6624)
  [refactor]Optimized the kvcache usage of Deepseek v3.2 (vllm-project#6610)
  [Feat](sfa,dcp) support dcp for sfa (vllm-project#6563)
  [BugFix] Add support for rotary_dim parameter when using partial rope in rotary_embedding (vllm-project#6581)
  [fix bug] fix tensor mismatch bug in sigmoid operate test case (vllm-project#6619)
  [Kernel]: Optimize DispatchFFNCombine performance (vllm-project#6468)
  [MISC] Clean up useless env USE_OPTIMIZED_MODEL (vllm-project#6618)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026

Labels

module:core, ready, ready-for-test

Projects

None yet

3 participants