[Bugfix] fix the precision issues that may arise from the inter-layer reuse of the workspace in certain scenarios #5522
Conversation
Code Review
This pull request addresses a precision issue related to workspace reuse in ACL graph capturing by modifying how workspace tensor references are managed. The change involves holding strong references to workspaces during graph capture to prevent premature garbage collection and then converting them to weak references afterward to conserve memory.
The overall approach is sound. However, I've found a critical issue in the new code that converts workspaces to weak references. It doesn't handle cases where graph parameters or specific workspaces are None, which can lead to a crash. I've provided a code suggestion to make this logic more robust and prevent potential runtime errors.
Additionally, for future clarity, it might be worth noting that the PR description states that 'different layers in the same computation graph are assigned independent workspaces', but the implementation appears to fix the issue by improving the lifecycle management of shared workspaces rather than making them independent. Aligning the description with the implementation could help future maintainers.
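The strong-to-weak reference lifecycle described in this review can be sketched in plain Python. This is a conceptual sketch only: `Workspace` is a hypothetical stand-in class, and the real code deals with NPU tensors and vLLM's `weak_ref_tensors` helper rather than `weakref.ref`.

```python
import weakref

class Workspace:
    """Stand-in for a workspace tensor (illustrative only, not vLLM code)."""

captured = {}

# During graph capture: hold a STRONG reference so the workspace cannot be
# garbage-collected (and its memory reused) while the graph still refers to it.
ws = Workspace()
captured["decode_8_tokens"] = ws

# After capture: downgrade to a weak reference so the allocator can reclaim
# the memory once nothing else holds the workspace.
captured["decode_8_tokens"] = weakref.ref(captured["decode_8_tokens"])

print(captured["decode_8_tokens"]() is ws)  # True: `ws` still keeps it alive
del ws  # drop the last strong reference
print(captured["decode_8_tokens"]() is None)  # True in CPython: collected
```

The key property is that the dictionary itself no longer keeps the workspace alive after capture, which is exactly what makes the `None` handling in the conversion step important.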
for num_tokens in _graph_params.workspaces:
    _graph_params.workspaces[num_tokens] = weak_ref_tensors(
        _graph_params.workspaces[num_tokens])
for num_tokens in _draft_graph_params.workspaces:
    _draft_graph_params.workspaces[num_tokens] = weak_ref_tensors(
        _draft_graph_params.workspaces[num_tokens])
This block of code has two potential issues that could lead to a crash:
1. `_graph_params` or `_draft_graph_params` can be `None`, which would cause an `AttributeError` when trying to access `.workspaces`.
2. The `workspaces` dictionary is initialized with `None` values. The loop iterates over all keys, so `_graph_params.workspaces[num_tokens]` could be `None`. Passing `None` to `weak_ref_tensors` raises a `ValueError`.
I suggest refactoring this to be more robust and DRY by iterating over the parameter objects and checking for None at both levels.
for params in (_graph_params, _draft_graph_params):
    if params is not None:
        for num_tokens, workspace in params.workspaces.items():
            if workspace is not None:
                params.workspaces[num_tokens] = weak_ref_tensors(workspace)
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
yiz-liu
left a comment
Please fix the other attention implementations as well, like full_graph_fia or mla. @WithHades
Turns out that PA
… reuse of the workspace in certain scenarios. Signed-off-by: WithHades <244036962@qq.com>
… reuse of the workspace in certain scenarios (vllm-project#5522)

### What this PR does / why we need it?
In the current process of implementing attention updates, the FIA operator shares a single workspace among different layers within the same computation graph. To enable memory reuse, we adopt the weak_ref_tensor mechanism. However, this approach may lead to precision anomalies in certain scenarios. To address this issue, different layers in the same computation graph are assigned independent workspaces.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
- vLLM version: v0.13.0
- vLLM main: vllm-project/vllm@45c1ca1

Signed-off-by: WithHades <244036962@qq.com>
Signed-off-by: wjunLu <wjunlu217@gmail.com>
What this PR does / why we need it?
In the current attention implementation, the FIA operator shares a single workspace among the different layers of the same computation graph. To enable memory reuse, it relies on the weak_ref_tensor mechanism. However, this approach can lead to precision anomalies in certain scenarios. To address this, each layer in a computation graph is now assigned an independent workspace.
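To see why sharing one workspace across layers can corrupt results, consider this toy illustration (pure Python, not vLLM code): a deferred read, analogous to a replayed graph node, observes whatever the last writer left in the shared buffer.

```python
# One scratch buffer shared by every "layer" (the problematic setup).
workspace = [0.0] * 4

def layer(scale, inputs, deferred_reads):
    # Each layer writes its intermediates into the SAME workspace...
    for i, x in enumerate(inputs):
        workspace[i] = x * scale
    # ...and schedules a read that runs later, as a captured graph replay might.
    deferred_reads.append(lambda: list(workspace))

deferred = []
layer(1, [1.0, 2.0, 3.0, 4.0], deferred)  # wants to read back [1, 2, 3, 4]
layer(2, [1.0, 2.0, 3.0, 4.0], deferred)  # overwrites with [2, 4, 6, 8]

# Layer 1's deferred read now sees layer 2's data: silent corruption.
print(deferred[0]())  # [2.0, 4.0, 6.0, 8.0]
```

Assigning each layer its own workspace (or guaranteeing the shared one is never overwritten before every reader has run) removes this aliasing.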
Does this PR introduce any user-facing change?
How was this patch tested?