Update flashinfer CUTLASS NVFP4 MoE Kernel to use per-expert global scaling factor #21408
Conversation
Code Review
This pull request updates the FlashInfer CUTLASS MoE kernel usage. The changes mostly involve renaming block_scale_interleave to nvfp4_block_scale_interleave to align with an API update. However, there's a potential issue in how activation scales are handled for FP4 MoE layers. A per-expert scale tensor is being incorrectly used to quantize the entire input activation tensor, which requires a single scalar scale. I've suggested a fix to restore the correct behavior.
The change from torch.min(layer.w13_input_scale_quant) to layer.w13_input_scale_quant may be incorrect.

The a1_gscale variable is used in extra_prepare_args and passed to FlashInferCutlassMoEPrepareAndFinalize.prepare, which uses the scale to quantize the input activation tensor (hidden_states) before the tokens are routed to the experts. The input activation tensor has shape (num_tokens, hidden_dim), so quantizing it requires a single scalar scale. The original code, torch.min(layer.w13_input_scale_quant), correctly produced this scalar by selecting the most conservative scale among all per-expert scales.

The new code passes layer.w13_input_scale_quant, a tensor of per-expert scales with shape (num_experts,). Using per-expert scales to quantize the entire activation tensor before expert routing is logically incorrect and may cause a runtime error or incorrect results from flashinfer.fp4_quantize. Please revert this change to use torch.min so a correct scalar scale is used for the initial input quantization.
Suggested change:
```diff
-a1_gscale = layer.w13_input_scale_quant
-a2_gscale = layer.w2_input_scale_quant
+a1_gscale = torch.min(layer.w13_input_scale_quant)
+a2_gscale = torch.min(layer.w2_input_scale_quant)
```
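To make the shape argument concrete, here is a minimal sketch of the distinction described above; the tensor sizes and standalone variables below are illustrative assumptions, not code from vLLM or flashinfer:

```python
import torch

# Illustrative sizes only; in vLLM these come from the model and MoE config.
num_tokens, hidden_dim, num_experts = 8, 4096, 16

hidden_states = torch.randn(num_tokens, hidden_dim, dtype=torch.bfloat16)
w13_input_scale_quant = torch.rand(num_experts)  # one input scale per expert

# Per-expert tensor: shape (num_experts,), the wrong granularity for
# quantizing the whole activation tensor before expert routing.
a1_gscale_per_expert = w13_input_scale_quant

# Scalar: a single scale that can be applied to the entire
# (num_tokens, hidden_dim) activation tensor.
a1_gscale_scalar = torch.min(w13_input_scale_quant)

print(a1_gscale_per_expert.shape)  # torch.Size([16])
print(a1_gscale_scalar.shape)      # torch.Size([])
```

Taking torch.min reproduces the original behavior: a single, most conservative scale among all per-expert scales, usable for the whole activation tensor before routing.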
alexm-redhat left a comment:
Verified locally that it works, LGTM!
Signed-off-by: Shu Wang. <shuw@nvidia.com>
Head branch was pushed to by a user without write access (force-pushed 54ac27d to f0b0755).
Signed-off-by: Shu Wang. <shuw@nvidia.com>
Head branch was pushed to by a user without write access
Signed-off-by: Shu Wang. <shuw@nvidia.com> Signed-off-by: x22x22 <wadeking@qq.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com> Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com> Signed-off-by: Paul Pak <paulpak58@gmail.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com> Signed-off-by: Diego-Castan <diego.castan@ibm.com>
Signed-off-by: Shu Wang. <shuw@nvidia.com>
Before the change:
```
INFO:lm_eval.loggers.evaluation_tracker:Output path not provided, skipping saving results aggregated
vllm (pretrained=nvidia/DeepSeek-R1-FP4,quantization=modelopt_fp4,tensor_parallel_size=4,enforce_eager=True,max_model_len=2048,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
```

After:
```
INFO:lm_eval.loggers.evaluation_tracker:Output path not provided, skipping saving results aggregated
vllm (pretrained=nvidia/DeepSeek-R1-FP4,quantization=modelopt_fp4,tensor_parallel_size=4,enforce_eager=True,max_model_len=2048,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
```
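For reference, runs with the configuration printed in the log headers above could be reproduced through lm-eval's Python API roughly as sketched below. This is a reconstruction from the log line, not the command actually used; in particular, the gsm8k task and the result printing are assumptions, since the truncated log does not name the task.

```python
import lm_eval

# Model args mirror the configuration printed in the evaluation log header.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=nvidia/DeepSeek-R1-FP4,"
        "quantization=modelopt_fp4,"
        "tensor_parallel_size=4,"
        "enforce_eager=True,"
        "max_model_len=2048,"
        "trust_remote_code=True"
    ),
    tasks=["gsm8k"],      # assumption: task not shown in the truncated log
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"])
```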