[Performance] Fused blockwise quant RMS norm #27883

Merged

ProExpertProg merged 34 commits into vllm-project:main on Dec 7, 2025
Conversation
Member

The optimization of this commit is beneficial. Benchmark: rms-norm-dynamic-per-token-quant, 1 thread. All configurations use DT torch.bfloat16 and GS [1, 128]; N is the number of tokens, D the hidden size, and R whether the residual add is included (lower is better).

Before:

| N | D | R | unfused_groupwise_fp8_impl | fused_groupwise_fp8_impl |
|------|------|-------|----------------------------|--------------------------|
| 1 | 1024 | True | 31.4 | 29.4 |
| 1 | 1024 | False | 34.0 | 30.4 |
| 1 | 5120 | True | 31.3 | 29.6 |
| 1 | 5120 | False | 34.0 | 29.5 |
| 4 | 1024 | True | 30.1 | 29.5 |
| 4 | 1024 | False | 35.1 | 31.2 |
| 4 | 5120 | True | 32.4 | 32.5 |
| 4 | 5120 | False | 36.1 | 30.7 |
| 16 | 1024 | True | 31.6 | 31.4 |
| 16 | 1024 | False | 35.2 | 32.3 |
| 16 | 5120 | True | 32.8 | 32.2 |
| 16 | 5120 | False | 35.1 | 31.6 |
| 64 | 1024 | True | 31.8 | 31.5 |
| 64 | 1024 | False | 35.2 | 32.7 |
| 64 | 5120 | True | 31.8 | 31.6 |
| 64 | 5120 | False | 36.1 | 32.1 |
| 256 | 1024 | True | 32.8 | 32.3 |
| 256 | 1024 | False | 36.1 | 32.0 |
| 256 | 5120 | True | 32.6 | 32.3 |
| 256 | 5120 | False | 35.2 | 31.5 |
| 1024 | 1024 | True | 31.4 | 39.0 |
| 1024 | 1024 | False | 35.1 | 36.9 |
| 1024 | 5120 | True | 31.8 | 53.3 |
| 1024 | 5120 | False | 35.5 | 49.3 |

Now:

| N | D | R | unfused_groupwise_fp8_impl | fused_groupwise_fp8_impl |
|------|------|-------|----------------------------|--------------------------|
| 1 | 1024 | True | 30.9 | 19.6 |
| 1 | 1024 | False | 36.5 | 19.4 |
| 1 | 5120 | True | 30.5 | 19.6 |
| 1 | 5120 | False | 36.5 | 19.6 |
| 4 | 1024 | True | 30.4 | 19.5 |
| 4 | 1024 | False | 34.2 | 19.3 |
| 4 | 5120 | True | 30.5 | 19.6 |
| 4 | 5120 | False | 34.2 | 19.4 |
| 16 | 1024 | True | 31.8 | 19.6 |
| 16 | 1024 | False | 36.4 | 19.5 |
| 16 | 5120 | True | 30.7 | 19.7 |
| 16 | 5120 | False | 36.5 | 19.7 |
| 64 | 1024 | True | 31.8 | 19.7 |
| 64 | 1024 | False | 36.5 | 19.6 |
| 64 | 5120 | True | 30.4 | 19.6 |
| 64 | 5120 | False | 34.3 | 19.5 |
| 256 | 1024 | True | 30.1 | 19.4 |
| 256 | 1024 | False | 34.4 | 19.8 |
| 256 | 5120 | True | 30.7 | 19.6 |
| 256 | 5120 | False | 34.2 | 19.5 |
| 1024 | 1024 | True | 30.7 | 19.4 |
| 1024 | 1024 | False | 34.4 | 19.4 |
| 1024 | 5120 | True | 30.7 | 28.7 |
| 1024 | 5120 | False | 34.5 | 28.7 |
This pull request has merge conflicts that must be resolved before it can be merged.
Contributor (Author)

Figured it out now, pushed the fix :)
ProExpertProg approved these changes on Dec 5, 2025.

The branch was force-pushed from e4aa624 to f4a206c.
penfree pushed a commit to penfree/vllm that referenced this pull request on Dec 8, 2025.
Contributor

After this PR, Qwen3 VLs (and most likely other FP8 VLMs, I guess) are failing with the following error: … which is raised at …
yeqcharlotte added a commit to yeqcharlotte/vllm that referenced this pull request on Dec 8, 2025:

Summary: Fix AMD compilation failure for DeepSeek models introduced in vllm-project#27883. The issue was that RMSNormQuantFusionPass unconditionally creates FusedAddRMSNormGroupQuantPattern and RMSNormGroupQuantPattern for group quantization (GroupShape 64 and 128), but the underlying C++ operation per_token_group_fp8_quant is only available on CUDA (wrapped in #ifndef USE_ROCM in torch_bindings.cpp). On AMD platforms, this caused an assertion failure:

AssertionError: unsupported quantization scheme QuantKey(f8e4m3fnuz,scale(f32,dynamic,GroupShape(row=1, col=128)),symmetric)

The fix guards the creation of group quant patterns with current_platform.is_cuda(), matching the guard used for registering these keys in QUANT_OPS.

Test Plan: Waiting for this DeepSeek job on AMD to complete: https://www.internalfb.com/vanguard/serving_test_cases/1967790977283741. Will also wait for external CI.

Differential Revision: D88608586
Privacy Context Container: L1370295
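For illustration, a minimal sketch of the platform guard described in that commit message; the pattern objects below are simplified stand-ins, and the function name and return type are assumptions, not vLLM's actual fusion-pass code.

```python
# Sketch only: the pattern entries are placeholders, not vLLM's classes or
# their real constructor signatures.
from vllm.platforms import current_platform

def build_group_quant_patterns(group_sizes=(64, 128)):
    # per_token_group_fp8_quant is only compiled for CUDA (it is wrapped in
    # #ifndef USE_ROCM in torch_bindings.cpp), so the GroupShape(1, 64/128)
    # patterns must not be created on other platforms such as ROCm.
    if not current_platform.is_cuda():
        return []
    patterns = []
    for gs in group_sizes:
        patterns.append(f"RMSNormGroupQuantPattern(group_size={gs})")
        patterns.append(f"FusedAddRMSNormGroupQuantPattern(group_size={gs})")
    return patterns
```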
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request on Jan 21, 2026.
PR description:

CUDA kernel and fusion code for fused groupwise FP8-quantized RMS norm. This code fuses RMS norm with FP8 quantization of the RMS norm's output when enable_fusion==True.
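For orientation, here is a minimal unfused PyTorch reference of the computation being fused, assuming a [1, 128] quantization group shape along the hidden dimension and the float8_e4m3fn dtype; the function name, argument handling, and scale layout are illustrative assumptions, not vLLM's actual op signature.

```python
import torch
from typing import Optional

def rms_norm_groupwise_fp8_ref(
    x: torch.Tensor,        # [num_tokens, hidden_size] activation
    weight: torch.Tensor,   # [hidden_size] RMSNorm weight
    eps: float = 1e-6,
    group_size: int = 128,  # GroupShape [1, 128]: one scale per 128 elements
    residual: Optional[torch.Tensor] = None,
):
    if residual is not None:
        # Fused-add variant: add the residual before normalizing.
        x = x + residual
    # RMS norm over the hidden dimension.
    var = x.float().pow(2).mean(dim=-1, keepdim=True)
    y = x.float() * torch.rsqrt(var + eps) * weight.float()
    # Dynamic groupwise FP8 quantization: one scale per contiguous group of
    # `group_size` elements along the hidden dimension.
    n, d = y.shape
    groups = y.view(n, d // group_size, group_size)
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / fp8_max
    q = (groups / scales).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return q.view(n, d), scales.view(n, d // group_size)

# Example: 4 tokens, hidden size 1024 -> 1024/128 = 8 scales per token.
x = torch.randn(4, 1024, dtype=torch.bfloat16)
w = torch.ones(1024, dtype=torch.bfloat16)
q, s = rms_norm_groupwise_fp8_ref(x, w)
```

The fused CUDA kernel performs the normalization and the group quantization in a single pass instead of materializing the intermediate normalized tensor.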
Testing:

- Test fused op: pytest tests/kernels/core/test_fused_quant_layernorm.py
- Test fusion: pytest tests/compile/test_fusion.py (tested with both VLLM_USE_DEEP_GEMM=1 and VLLM_USE_DEEP_GEMM=0)
- Offline inference: run with … (tested with both VLLM_USE_DEEP_GEMM=1 and VLLM_USE_DEEP_GEMM=0, verified that the fused kernel is being produced; a hypothetical invocation is sketched below)
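The exact offline-inference command is not reproduced above; the following is a hypothetical run with the fusion pass enabled. The model name is taken from the benchmark section below, and the compilation_config fields are assumptions about vLLM's pass configuration, not the command used in this PR.

```python
from vllm import LLM, SamplingParams

# Hypothetical invocation: enable_fusion turns on the RMSNorm + quant fusion
# pass; prompt and sampling parameters are placeholders.
llm = LLM(
    model="Qwen/Qwen3-30B-A3B-FP8",
    compilation_config={"pass_config": {"enable_fusion": True}},
)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```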
Benchmarking:

- Microbenchmark of the isolated op: python benchmarks/fused_kernels/layernorm_rms_benchmarks.py. Results on H100: …
- E2E sonnet benchmark of Qwen/Qwen3-30B-A3B-FP8 compared to main (H100): …