
[Feat] Multi-stream for eplb heat collection and aggregation#4214

Merged
MengqingCao merged 8 commits into vllm-project:main from dsxsteven:main_1114_multistream-heataggr
Dec 9, 2025

Conversation

Contributor

@dsxsteven dsxsteven commented Nov 17, 2025

What this PR does / why we need it?

This PR optimizes EPLB heat collection and aggregation by moving them onto a dedicated asynchronous stream so they can overlap with other computation.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Co-authored-by: Skywalker-EP <173723846@qq.com>, <walterchenchn@outlook.com>

Signed-off-by: daishixun <dsxsteven@sina.com>
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message to fulfill the PR description, so reviewers and future developers can understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces an asynchronous stream to handle MoE expert load (heat) collection, aiming to optimize performance by overlapping it with other computations. The changes involve creating a dedicated stream and switching to it for MoE load accumulation and gathering.

My review has identified a critical race condition and a high-severity performance issue in vllm_ascend/eplb/eplb_updator.py. The race condition is due to missing synchronization between the main computation stream and the new asynchronous stream, which could lead to incorrect load balancing. The performance issue is related to a buffer being re-allocated on every call, which should be addressed for efficiency. Please see the detailed comments for suggestions on how to fix these issues.

Comment thread vllm_ascend/eplb/eplb_updator.py Outdated
Comment on lines 156 to 171
with npu_stream_switch(moe_load_async_stream()):
    self.world_size = dist.get_world_size()
    self.device = local_load.device
    if self._gather_buffer is None:
        shape = (self.world_size, *local_load.shape)
        self._gather_buffer = torch.empty(shape,
                                          dtype=local_load.dtype,
                                          device=self.device)

    dist.all_gather_into_tensor(self._gather_buffer, local_load)

    moe_load = self._gather_buffer.permute(1, 0, 2)
    self.shared_dict["moe_load"] = moe_load.cpu()
    logger.debug(
        f"[ModelRunner] Updated shared_dict['moe_load'] shape={moe_load.shape}"
    )
Contributor


critical

There is a race condition here. The moe_load tensors are updated asynchronously on moe_load_async_stream in fused_moe.py. However, self.adaptor.get_rank_expert_workload() is called on the default stream (on line 152, before this block) to read these tensors without any synchronization. This can lead to reading stale or incomplete data, causing incorrect load balancing. To fix this, you must synchronize the streams before reading moe_load. For example, you could add moe_load_async_stream().synchronize() before the call to self.adaptor.get_rank_expert_workload() on line 152.
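The fix the reviewer suggests is the standard produce-then-synchronize pattern: finish all pending work on the side stream before the default stream reads the shared tensors. As a hedged CPU sketch (a background thread stands in for the NPU side stream, since `torch_npu` streams require Ascend hardware; all names here are illustrative, not the PR's code):

```python
import threading

class LoadCollector:
    """CPU stand-in for the async-stream pattern in this PR: a background
    worker (the 'side stream') accumulates expert loads, and the reader
    must synchronize before consuming them."""

    def __init__(self, num_experts=4):
        self.moe_load = [0] * num_experts
        self._done = threading.Event()

    def collect_async(self, updates):
        def work():
            for expert, load in updates:
                self.moe_load[expert] += load
            self._done.set()          # analogous to the stream finishing its work
        threading.Thread(target=work).start()

    def synchronize(self):
        # analogous to moe_load_async_stream().synchronize()
        # before get_rank_expert_workload() reads moe_load
        self._done.wait()

collector = LoadCollector()
collector.collect_async([(0, 3), (1, 5)])
collector.synchronize()               # without this, the read may see stale data
print(collector.moe_load[:2])         # [3, 5]
```

Reading `moe_load` without the `synchronize()` call is exactly the race described above: the reader can observe partially applied updates.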

Comment thread vllm_ascend/eplb/eplb_updator.py Outdated
Comment on lines +159 to +163
if self._gather_buffer is None:
    shape = (self.world_size, *local_load.shape)
    self._gather_buffer = torch.empty(shape,
                                      dtype=local_load.dtype,
                                      device=self.device)
Contributor


high

The self._gather_buffer is reset to None on every call to compute_and_set_moe_load (on line 154). This makes this condition always true, causing the buffer to be re-allocated on every invocation, which is inefficient. To avoid this performance issue, self._gather_buffer should be initialized to None in the __init__ method of the EplbUpdator class, and the line self._gather_buffer = None should be removed from this method.
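The suggested fix can be sketched as follows. This is a plain-Python illustration of the lazy-allocation pattern (lists stand in for tensors and a loop stands in for the collective; the class and counter are hypothetical, not the PR's code):

```python
class EplbUpdatorSketch:
    """Sketch of the suggested fix: _gather_buffer is set to None once
    in __init__ and never reset per call, so the lazy allocation below
    runs exactly once."""

    def __init__(self, world_size):
        self.world_size = world_size
        self._gather_buffer = None    # allocated lazily, exactly once
        self.allocations = 0          # counter for illustration only

    def compute_and_set_moe_load(self, local_load):
        # Note: no `self._gather_buffer = None` here, so the
        # buffer survives across calls and is reused.
        if self._gather_buffer is None:
            self._gather_buffer = [[0] * len(local_load)
                                   for _ in range(self.world_size)]
            self.allocations += 1
        # stand-in for dist.all_gather_into_tensor(buffer, local_load)
        for rank in range(self.world_size):
            self._gather_buffer[rank] = list(local_load)
        return self._gather_buffer

u = EplbUpdatorSketch(world_size=2)
u.compute_and_set_moe_load([1, 2, 3])
u.compute_and_set_moe_load([4, 5, 6])
print(u.allocations)                  # 1: the buffer is reused across calls
```

With the original per-call reset, `allocations` would equal the number of calls instead of 1, paying an allocation on every invocation.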

Signed-off-by: daishixun <dsxsteven@sina.com>
Signed-off-by: daishixun <dsxsteven@sina.com>
Signed-off-by: daishixun <dsxsteven@sina.com>
@dsxsteven dsxsteven closed this Nov 18, 2025
@dsxsteven dsxsteven reopened this Nov 18, 2025
@dsxsteven dsxsteven changed the title multistream for eplb heat collection [Feat]multistream for eplb heat collection and aggregation Nov 18, 2025
@dsxsteven dsxsteven changed the title multistream for eplb heat collection [Feat] Multi-stream for eplb heat collection and aggregation Nov 18, 2025
        logger.debug(
            f"[ModelRunner] Updated shared_dict['moe_load'] shape={moe_load.shape}"
        )
with npu_stream_switch(moe_load_async_stream()):
Collaborator


Maybe better to set moe_load_async_stream as a class attribute of EplbUpdator.

Contributor Author


Already moved this function to the eplb module; since other files also call this stream, keeping it in the eplb utils is better.
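A module-level accessor of the kind discussed here might be sketched as below. This is illustrative only (the function name follows the PR, everything else is an assumption): `object()` stands in for `torch_npu.npu.Stream()`, which needs NPU hardware, and the lock is just one way to make lazy creation safe if multiple threads can race to the first call.

```python
import threading

_MOE_LOAD_ASYNC_STREAM = None
_STREAM_LOCK = threading.Lock()

def moe_load_async_stream():
    """Module-level, lazily created singleton stream accessor, so any
    file in the eplb module can share the same side stream."""
    global _MOE_LOAD_ASYNC_STREAM
    if _MOE_LOAD_ASYNC_STREAM is None:
        with _STREAM_LOCK:
            if _MOE_LOAD_ASYNC_STREAM is None:
                # stand-in for torch_npu.npu.Stream()
                _MOE_LOAD_ASYNC_STREAM = object()
    return _MOE_LOAD_ASYNC_STREAM

print(moe_load_async_stream() is moe_load_async_stream())  # True: one shared stream
```

Compared with a class attribute on EplbUpdator, a module-level accessor lets callers outside that class (e.g. fused_moe.py) reach the same stream without importing the updator.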

@whx-sjtu whx-sjtu added the ready (read for review) and ready-for-test (start test by label for PR) labels Nov 19, 2025
@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: daishixun <dsxsteven@sina.com>
Signed-off-by: daishixun <dsxsteven@sina.com>
@dsxsteven dsxsteven force-pushed the main_1114_multistream-heataggr branch from 6c7c895 to e09650a Compare December 4, 2025 03:39
Comment thread vllm_ascend/utils.py Outdated
return _SHARED_EXPERTS_CALCULATION_STREAM


def moe_load_async_stream() -> torch_npu.npu.Stream:
Collaborator


move this function to eplb module

Collaborator

@whx-sjtu whx-sjtu left a comment


LGTM

Signed-off-by: daishixun <dsxsteven@sina.com>
@dsxsteven dsxsteven force-pushed the main_1114_multistream-heataggr branch from cf110ae to d5b59ad Compare December 4, 2025 11:20
Collaborator

@MengqingCao MengqingCao left a comment


LGTM, thx!

@MengqingCao MengqingCao merged commit 9a885d0 into vllm-project:main Dec 9, 2025
16 of 18 checks passed
Clorist33 pushed a commit to Clorist33/vllm-ascend that referenced this pull request Dec 10, 2025
…oject#4214)

### What this PR does / why we need it?
This PR optimizes multistream for eplb heat collection and aggregation

- vLLM version: v0.12.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.12.0

---------

Signed-off-by: daishixun <dsxsteven@sina.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Mercykid-bash pushed a commit to Mercykid-bash/vllm-ascend that referenced this pull request Dec 10, 2025
…oject#4214)

### What this PR does / why we need it?
This PR optimizes multistream for eplb heat collection and aggregation

- vLLM version: v0.12.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.12.0

---------

Signed-off-by: daishixun <dsxsteven@sina.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
wangyibo1005 added a commit to wangyibo1005/vllm-ascend that referenced this pull request Dec 31, 2025
…llm-project#4214)"

This reverts commit 9a885d0.

Signed-off-by: Wangyibo1005 <2633333316@qq.com>
@dsxsteven dsxsteven deleted the main_1114_multistream-heataggr branch March 10, 2026 03:34

Labels

module:ops · ready (read for review) · ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants