
[Bugfix] Fix precision issues that may arise from inter-layer reuse of the workspace in certain scenarios #5522

Merged

yiz-liu merged 1 commit into vllm-project:main from WithHades:workspace_fix on Dec 31, 2025
Conversation

@WithHades
Contributor

@WithHades WithHades commented Dec 30, 2025

What this PR does / why we need it?

In the current attention implementation, the FIA operator shares a single workspace among the different layers within the same computation graph. To enable memory reuse, we adopt the weak_ref_tensor mechanism. However, this approach may lead to precision anomalies in certain scenarios. To address this, different layers in the same computation graph are assigned independent workspaces.
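
The lifecycle issue behind this fix can be illustrated with plain Python weak references. This is only an analogy, not vllm-ascend code: `Workspace` and `capture_graph` are made-up names, and a real NPU workspace is a device buffer, not a `bytearray`. The point is that a strong reference must outlive graph capture, and only afterwards can it be downgraded so memory can be reclaimed.

```python
import weakref

class Workspace:
    """Stand-in for an NPU workspace buffer (illustrative only)."""
    def __init__(self, size: int) -> None:
        self.buf = bytearray(size)

def capture_graph() -> Workspace:
    # During capture, hold a strong reference so recorded ops
    # cannot end up pointing at memory that was reclaimed mid-capture.
    ws = Workspace(1024)
    # ... record graph ops that read/write ws.buf ...
    return ws

ws = capture_graph()
ref = weakref.ref(ws)   # post-capture: keep only a weak reference
assert ref() is ws      # alive while a strong reference remains
del ws                  # drop the last strong reference
assert ref() is None    # the buffer can now be reclaimed
```

If the downgrade happens too early, or a buffer is shared between layers whose lifetimes overlap, the captured graph can read stale or reused memory, which shows up as the precision anomalies this PR describes.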

Does this PR introduce any user-facing change?

How was this patch tested?

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a precision issue related to workspace reuse in ACL graph capturing by modifying how workspace tensor references are managed. The change involves holding strong references to workspaces during graph capture to prevent premature garbage collection and then converting them to weak references afterward to conserve memory.

The overall approach is sound. However, I've found a critical issue in the new code that converts workspaces to weak references. It doesn't handle cases where graph parameters or specific workspaces are None, which can lead to a crash. I've provided a code suggestion to make this logic more robust and prevent potential runtime errors.

Additionally, for future clarity, it might be worth noting that the PR description states that 'different layers in the same computation graph are assigned independent workspaces', but the implementation appears to fix the issue by improving the lifecycle management of shared workspaces rather than making them independent. Aligning the description with the implementation could help future maintainers.

Comment thread: vllm_ascend/compilation/acl_graph.py (Outdated)
Comment on lines +169 to +174
for num_tokens in _graph_params.workspaces:
_graph_params.workspaces[num_tokens] = weak_ref_tensors(
_graph_params.workspaces[num_tokens])
for num_tokens in _draft_graph_params.workspaces:
_draft_graph_params.workspaces[num_tokens] = weak_ref_tensors(
_draft_graph_params.workspaces[num_tokens])

critical

This block of code has two potential issues that could lead to a crash:

  1. _graph_params or _draft_graph_params can be None, which would cause an AttributeError when trying to access .workspaces.
  2. The workspaces dictionary is initialized with None values. The loop iterates over all keys, so _graph_params.workspaces[num_tokens] could be None. Passing None to weak_ref_tensors raises a ValueError.

I suggest refactoring this to be more robust and DRY by iterating over the parameter objects and checking for None at both levels.

            for params in (_graph_params, _draft_graph_params):
                if params is not None:
                    for num_tokens, workspace in params.workspaces.items():
                        if workspace is not None:
                            params.workspaces[num_tokens] = weak_ref_tensors(workspace)
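
The suggested guard can be exercised end to end with stub objects. This is a runnable sketch, not the actual vllm_ascend code: `GraphParams` and the `weak_ref_tensors` stand-in are illustrative, and only the two None checks mirror the suggestion above.

```python
from dataclasses import dataclass, field

@dataclass
class GraphParams:
    # workspaces maps a token count to a workspace buffer (or None,
    # matching how the dictionary is initialized in the reviewed code).
    workspaces: dict = field(default_factory=dict)

def weak_ref_tensors(ws):
    # Stand-in for the real helper: it rejects None like the original.
    if ws is None:
        raise ValueError("cannot take a weak reference of None")
    return ("weak", ws)

_graph_params = GraphParams(workspaces={16: "buf16", 32: None})
_draft_graph_params = None  # drafting may be disabled entirely

for params in (_graph_params, _draft_graph_params):
    if params is not None:
        for num_tokens, workspace in params.workspaces.items():
            if workspace is not None:
                params.workspaces[num_tokens] = weak_ref_tensors(workspace)

# Only the non-None entry was converted; nothing crashed.
assert _graph_params.workspaces == {16: ("weak", "buf16"), 32: None}
```

Mutating existing dictionary values inside `items()` is safe here because no keys are added or removed during iteration.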

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Collaborator

@yiz-liu yiz-liu left a comment


Please fix other attn impl as well like full_graph_fia or mla. @WithHades

@yiz-liu
Collaborator

yiz-liu commented Dec 31, 2025

Please fix other attn impl as well like full_graph_fia or mla. @WithHades

Turns out that PA applied weak_ref_tensors twice before, so the other attention implementations don't need to be fixed.

@wangxiyuan wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels on Dec 31, 2025
… reuse of the workspace in certain scenarios.

Signed-off-by: WithHades <244036962@qq.com>
@yiz-liu yiz-liu merged commit 03679cf into vllm-project:main Dec 31, 2025
19 checks passed
wjunLu pushed a commit to wjunLu/vllm-ascend that referenced this pull request Jan 4, 2026
… reuse of the workspace in certain scenarios (vllm-project#5522)

### What this PR does / why we need it?

In the current process of implementing attention updates, the FIA
operator shares a single workspace among different layers within the
same computation graph. To enable memory reuse, we adopt the
weak_ref_tensor mechanism. However, this approach may lead to precision
anomalies in certain scenarios. To address this issue, different layers
in the same computation graph are assigned independent workspaces.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@45c1ca1

Signed-off-by: WithHades <244036962@qq.com>
Signed-off-by: wjunLu <wjunlu217@gmail.com>
Rozwel-dx pushed a commit to Rozwel-dx/vllm-ascend that referenced this pull request Jan 8, 2026
… reuse of the workspace in certain scenarios (vllm-project#5522)

ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
… reuse of the workspace in certain scenarios (vllm-project#5522)

maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
… reuse of the workspace in certain scenarios (vllm-project#5522)

ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
… reuse of the workspace in certain scenarios (vllm-project#5522)


Labels

ready (read for review), ready-for-test (start test by label for PR)


3 participants