Add CUTLASS FP8 MOE benchmark scripts and kernel config #25302
mgoin merged 2 commits into vllm-project:main
Conversation
Signed-off-by: Chenxi Yang <cxyang@fb.com>
Code Review
This pull request introduces benchmarking scripts for CUTLASS FP8 MOE kernels. My review focuses on the new benchmark script benchmark_cutlass_moe_fp8.py. I've identified two critical issues in the benchmark logic that could lead to misleading or incorrect performance results. One issue is related to the incorrect handling of the per_out_ch quantization option, and the other is a silent override of the per_act_token option, which affects the validity of the comparison between CUTLASS and Triton kernels. These issues should be addressed to ensure the benchmark's correctness and reliability.
# Force per-tensor quantization for all cases
per_act_token = False
The per_act_token parameter is unconditionally overwritten to False on line 121. This is a critical issue for a benchmark script as it leads to incorrect results being reported for the per_act_token=True configuration. It also makes the comparison between Triton and CUTLASS unfair, as a workaround for a CUTLASS-specific issue is being applied to both kernels. The benchmark should not silently change the configuration being tested.
A better approach is to remove this global overwrite. If CUTLASS does not support per_act_token=True, you should handle it conditionally, for example by skipping the CUTLASS portion of the benchmark for that configuration and reporting its time as NaN, while still running the Triton benchmark correctly.
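As a sketch of the suggested fix (names and timings here are hypothetical placeholders, not the actual benchmark code): run the Triton kernel unconditionally, and only skip the CUTLASS side when the configuration is unsupported, reporting NaN for the skipped case.

```python
import math

def run_benchmark(per_act_token: bool,
                  cutlass_supports_per_act_token: bool = False):
    """Illustrative sketch: skip the unsupported CUTLASS configuration
    instead of silently overriding per_act_token for both kernels."""
    triton_time = 1.0  # placeholder: time the Triton kernel here
    if per_act_token and not cutlass_supports_per_act_token:
        # Report NaN rather than a misleading number for CUTLASS.
        cutlass_time = math.nan
    else:
        cutlass_time = 1.0  # placeholder: time the CUTLASS kernel here
    return triton_time, cutlass_time
```

This way the Triton results for per_act_token=True remain valid, and the output makes the unsupported CUTLASS configuration explicit instead of silently changing it.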
# Per-channel quantization - not yet implemented properly
# For now, fall back to per-tensor quantization
w1_fp8q[expert], w1_scale_temp = ops.scaled_fp8_quant(w1[expert])
w2_fp8q[expert], w2_scale_temp = ops.scaled_fp8_quant(w2[expert])
# Expand scalar scales to the expected per-channel shape
w1_scale[expert] = w1_scale_temp.expand(2 * n, 1)
w2_scale[expert] = w2_scale_temp.expand(k, 1)
The comment at line 92 correctly states that per-channel quantization is not properly implemented. The current implementation falls back to per-tensor quantization, which is misleading for a benchmark as it doesn't perform true per-channel quantization. This can lead to incorrect performance results being reported for this configuration. It's better to explicitly fail for this configuration to avoid misleading results.
Suggested change:
- # Per-channel quantization - not yet implemented properly
- # For now, fall back to per-tensor quantization
- w1_fp8q[expert], w1_scale_temp = ops.scaled_fp8_quant(w1[expert])
- w2_fp8q[expert], w2_scale_temp = ops.scaled_fp8_quant(w2[expert])
- # Expand scalar scales to the expected per-channel shape
- w1_scale[expert] = w1_scale_temp.expand(2 * n, 1)
- w2_scale[expert] = w2_scale_temp.expand(k, 1)
+ raise NotImplementedError(
+     "Per-channel quantization is not yet implemented properly. "
+     "This benchmark configuration should be disabled.")
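For context on what true per-channel quantization would look like (this is an illustrative NumPy sketch, not the vLLM ops API; FP8_MAX and the function name are assumptions): each output channel gets its own scale computed from that row's absolute maximum, rather than a single scalar scale expanded to the per-channel shape.

```python
import numpy as np

FP8_MAX = 448.0  # max representable magnitude of float8_e4m3fn

def per_channel_quant(w: np.ndarray):
    """Illustrative per-output-channel quantization sketch:
    one scale per output row instead of one expanded scalar."""
    # Absolute max over each output channel, shape (out_channels, 1)
    amax = np.abs(w).max(axis=1, keepdims=True)
    scale = amax / FP8_MAX
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero
    # Quantized values land in the representable FP8 range
    w_q = np.clip(w / scale, -FP8_MAX, FP8_MAX)
    return w_q, scale

w = np.array([[100.0, -200.0], [0.5, 0.25]])
w_q, scale = per_channel_quant(w)
# scale has shape (2, 1): a distinct scale per output channel
```

Dequantizing with `w_q * scale` recovers the original weights here, which is exactly the per-channel behavior the expanded-scalar fallback fails to provide.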
…#25302) Signed-off-by: Chenxi Yang <cxyang@fb.com> Co-authored-by: Chenxi Yang <cxyang@fb.com>
Signed-off-by: Chenxi Yang <cxyang@fb.com> Co-authored-by: Chenxi Yang <cxyang@fb.com> Signed-off-by: yewentao256 <zhyanwentao@126.com>
This commit introduces benchmarking infrastructure to compare the performance of CUTLASS FP8 MOE kernels against the existing Triton FP8 MOE implementation.
Key Features:
benchmark_cutlass_moe_fp8.py: Benchmark suite comparing CUTLASS vs Triton FP8 MOE kernels
Test Plan
Test Result