
Conversation

@rasmith
Contributor

@rasmith rasmith commented Aug 5, 2025

This PR fixes an issue where wvSplitKQ is not called when it should be when running a quantized FP8 model. Under torch.compile, this code path is not captured during compilation, so the kernel does not get called during model execution even in cases where it should be (e.g. batch size = 1).

I tested using Llama-3.1-8B-Instruct-FP8-KV.

Before this fix, the kernel was not being called at all unless eager mode was enforced.

Without the fix, and running with eager mode enforced:

 -------------------------------------------------------  ---------
                                                    Name  # of Calls
 -------------------------------------------------------  ---------
 void wvSplitKQ_hf_sml_<__hip_bfloat16, c10::Float8_e...      32640

After applying the fix and profiling without eager mode enforced:

 -------------------------------------------------------  ---------
                                                    Name  # of Calls
 -------------------------------------------------------  ---------
 void wvSplitKQ_hf_sml_<__hip_bfloat16, c10::Float8_e...      32640
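
A minimal sketch (not part of the PR) of how per-kernel call counts like the tables above can be collected with torch.profiler, assuming a GPU build of PyTorch; the `run_model_once` helper is a placeholder, not a real vLLM inference step:

```python
import torch
from torch.profiler import profile, ProfilerActivity


def run_model_once():
    # Placeholder for a real inference step (e.g. one decode pass of the model).
    a = torch.randn(1, 4096, device="cuda", dtype=torch.bfloat16)
    w = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
    return a @ w


with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    run_model_once()

# Kernel names such as wvSplitKQ_hf_sml_ show up in this table with their call counts.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```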

@github-actions

github-actions bot commented Aug 5, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the rocm Related to AMD ROCm label Aug 5, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly refactors rocm_per_tensor_w8a8_scaled_mm to register a custom PyTorch operation. This ensures that the data-dependent control flow for dispatching to the wvSplitKQ kernel is correctly handled by torch.compile, which was the goal of this PR. The implementation is sound. However, I've identified a pre-existing critical issue where the bias term is dropped when the wvSplitKQ path is taken. I've included a suggested fix to ensure the bias is applied correctly in all cases.
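
A minimal sketch of the approach this review describes, under stated assumptions (PyTorch >= 2.4 torch.library.custom_op; the helper functions and op name below are placeholders, not vLLM's actual code): the data-dependent dispatch lives inside a custom op, so torch.compile treats it as a single opaque node and the batch-size branch still runs at execution time, and the bias is added explicitly so neither path drops it.

```python
from typing import Optional

import torch


def _wv_split_kq_like(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Stand-in for the ROCm wvSplitKQ small-batch kernel (no fused bias).
    return a @ b


def _generic_gemm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Stand-in for the generic scaled-mm fallback path.
    return a @ b


@torch.library.custom_op("sketch::per_tensor_w8a8_scaled_mm", mutates_args=())
def per_tensor_w8a8_scaled_mm(
    a: torch.Tensor, b: torch.Tensor, bias: Optional[torch.Tensor]
) -> torch.Tensor:
    # Data-dependent dispatch: only small batch sizes take the split-K path
    # (the threshold here is illustrative).
    out = _wv_split_kq_like(a, b) if a.shape[0] <= 4 else _generic_gemm(a, b)
    if bias is not None:
        out = out + bias  # applied on both paths so the small-batch branch keeps the bias
    return out


@per_tensor_w8a8_scaled_mm.register_fake
def _(a, b, bias):
    # Shape-only meta implementation used while torch.compile traces the graph.
    return a.new_empty((a.shape[0], b.shape[1]))


# Because the op is a single node in the compiled graph, a batch size of 1
# still reaches the small-batch branch at runtime instead of being traced away.
compiled = torch.compile(lambda a, b: per_tensor_w8a8_scaled_mm(a, b, None))
out = compiled(torch.randn(1, 64), torch.randn(64, 64))
```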

@rasmith rasmith requested a review from yewentao256 as a code owner August 15, 2025 17:53
Member

@yewentao256 yewentao256 left a comment


Thanks for the work!
But in the PR description, seems without the fix and with the fix, the output is the same?

@rasmith
Contributor Author

rasmith commented Aug 18, 2025

Thanks for the work! But in the PR description, seems without the fix and with the fix, the output is the same?

Without the fix, and running with eager mode enforced:

@SageMoore
Contributor

Thanks for the work! But in the PR description, seems without the fix and with the fix, the output is the same?

Without the fix, and running with eager mode enforced:

I was also confused by this 🙂

@rasmith
Contributor Author

rasmith commented Aug 19, 2025

Thanks for the work! But in the PR description, seems without the fix and with the fix, the output is the same?

Yes, the output is the same; we just want the function to be called on ROCm when it is supposed to be called.

@rasmith
Contributor Author

rasmith commented Aug 20, 2025

Thanks for the work! But in the PR description, seems without the fix and with the fix, the output is the same?

Yes, the output is the same. We want the function to be called, which was happening before but is no longer happening as it should; this PR fixes that.

@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 22, 2025
@mgoin mgoin enabled auto-merge (squash) August 22, 2025 19:38
@mgoin mgoin merged commit cc7ae5e into vllm-project:main Aug 22, 2025
52 checks passed
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Xiao Yu <[email protected]>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
mengxingkongzhouhan pushed a commit to mengxingkongzhouhan/vllm that referenced this pull request Aug 30, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Sep 3, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
ekagra-ranjan pushed a commit to ekagra-ranjan/vllm that referenced this pull request Sep 4, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Ekagra Ranjan <[email protected]>
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
…ot being called when it should when using quantized FP8 model (vllm-project#22281)

Signed-off-by: Randall Smith <[email protected]>

Labels

ready: ONLY add when PR is ready to merge/full CI is needed
rocm: Related to AMD ROCm

4 participants