[Bugfix] Revert "Zero-init MLA attention output buffers to prevent NaN from CUDA graph padding"#38359
Open
elvircrn wants to merge 3 commits into vllm-project:main
Conversation
Revert "[Bugfix] Zero-init MLA attention output buffers to prevent NaN from CUDA graph padding (vllm-project#37442)"

This reverts commit ef2c4f7. The zero-init workaround is unnecessary: the NaN was caused by a different issue (int64 expert IDs in the routing simulator). Reverting restores the original torch.empty allocation, which avoids the overhead of pre-allocated zero-init buffers.

Signed-off-by: Elvir Crncevic <elvircrn@gmail.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
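To illustrate why the zero-init was a workaround rather than a fix: under CUDA graphs the batch is padded to a fixed capture size, so an uninitialized (`torch.empty`-style) output buffer leaves garbage in the padded rows. If consumers always slice to the number of valid tokens, that garbage is never read and zeroing only adds memset traffic. The sketch below is a hypothetical, simplified model of that pattern using numpy; the names (`run_decode`, `CAPTURE_SIZE`, `num_valid`) are illustrative stand-ins, not vLLM's API.

```python
import numpy as np

# Hypothetical CUDA-graph-style decode: output padded to a fixed
# capture size; only the first `num_valid` rows are real tokens.
CAPTURE_SIZE, HEAD_DIM = 8, 4

def run_decode(num_valid, zero_init=False):
    # zero_init=True mimics the reverted workaround (pre-zeroed buffer);
    # zero_init=False mimics the restored empty (uninitialized) allocation.
    if zero_init:
        out = np.zeros((CAPTURE_SIZE, HEAD_DIM), dtype=np.float32)
    else:
        out = np.empty((CAPTURE_SIZE, HEAD_DIM), dtype=np.float32)
    out[:num_valid] = 1.0   # the kernel writes only the valid rows
    return out[:num_valid]  # callers slice the padding away

# As long as every consumer slices to num_valid, the padded garbage
# rows are never observed, so zero-initialization buys nothing.
result = run_decode(3)
assert result.shape == (3, HEAD_DIM)
assert np.all(result == 1.0)
```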
Contributor
Code Review
This pull request removes pre-allocated output buffers and simplifies tensor allocation logic across the CUTLASS and FlashInfer MLA backends. In cutlass_mla.py, the _decode_out buffer is replaced with a direct new_empty allocation, while in flashinfer_mla.py, the manual buffer management and padding zeroing workarounds in forward_mqa are removed. I have no feedback to provide.
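The allocation change the review describes can be sketched as a before/after pattern. This is a hedged illustration only: the class and method names below are simplified stand-ins for the CUTLASS/FlashInfer MLA backends, and the "kernel" is a dummy elementwise op; the real code uses `Tensor.new_empty` on a torch tensor rather than `np.empty_like`.

```python
import numpy as np

class BackendWithBuffer:
    """Before: a pre-allocated, zeroed output buffer reused per call
    (the pattern removed by this PR)."""
    def __init__(self, max_tokens, head_dim):
        self._decode_out = np.zeros((max_tokens, head_dim), dtype=np.float32)

    def forward(self, q):
        out = self._decode_out[: q.shape[0]]
        out[...] = q * 2.0  # stand-in for the attention kernel writing out
        return out

class BackendDirectAlloc:
    """After: allocate an uninitialized output per call, analogous to
    q.new_empty(q.shape) in torch."""
    def forward(self, q):
        out = np.empty_like(q)
        out[...] = q * 2.0  # same stand-in kernel; output fully overwritten
        return out
```

Because the kernel overwrites every element of the returned slice, both variants produce identical results; the second simply skips the up-front zero fill and the buffer bookkeeping.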
Summary
Revert to the original torch.empty allocation, removing the overhead of pre-allocated zero-init buffers and the out= workaround in FlashInfer MLA.

Test plan
🤖 Generated with Claude Code