
Add CUTLASS FP8 MOE benchmark scripts and kernel config #25302

Merged
mgoin merged 2 commits into vllm-project:main from chenxi-yang:cutlass_kernel_eval_new
Sep 24, 2025
Conversation

@chenxi-yang (Contributor) commented Sep 20, 2025

This commit adds benchmarking infrastructure to compare the performance of the CUTLASS FP8 MOE kernels against the existing Triton FP8 MOE implementation.

Key Features:

  • benchmark_cutlass_moe_fp8.py: Benchmark suite comparing CUTLASS vs Triton FP8 MOE kernels

Test Plan

python benchmark_cutlass_moe_fp8.py \
    --model "Llama-4-Maverick-17B-128E-Instruct-FP8" \
    --tp-sizes 8 \
    --batch-size 2 4 8 16 32 64 128 256 512 1024 2048 3072 4096 8192 16384 \
    --per-act-token-opts false \
    --per-out-ch-opts false
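The benchmark script itself is not reproduced on this page. As a rough illustration of the kind of timing harness such a kernel comparison typically uses, here is a minimal CPU-only sketch (the names `bench` and `compare` are illustrative, not taken from the PR; a real GPU benchmark would synchronize the device and use CUDA events instead of wall-clock timers):

```python
import statistics
import time


def bench(fn, *args, warmup: int = 3, iters: int = 10) -> float:
    """Return the median wall-clock time of fn(*args) in milliseconds."""
    for _ in range(warmup):  # warm caches / lazy compilation before measuring
        fn(*args)
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - start) * 1e3)
    return statistics.median(times)


def compare(kernels: dict, *args) -> dict:
    """Time each named kernel on the same inputs; report speedups
    relative to the first entry (the baseline)."""
    results = {name: bench(fn, *args) for name, fn in kernels.items()}
    baseline = next(iter(results.values()))
    return {name: {"ms": ms, "speedup": baseline / ms}
            for name, ms in results.items()}
```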

Test Result

(benchmark results screenshot omitted)

Signed-off-by: Chenxi Yang <cxyang@fb.com>
@chenxi-yang chenxi-yang requested a review from mgoin as a code owner September 20, 2025 05:12
@mergify mergify bot added the performance Performance-related issues label Sep 20, 2025
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment
Code Review

This pull request introduces benchmarking scripts for CUTLASS FP8 MOE kernels. My review focuses on the new benchmark script benchmark_cutlass_moe_fp8.py. I identified two critical issues in the benchmark logic that could produce misleading or incorrect performance results: one is the incorrect handling of the per_out_ch quantization option, and the other is a silent override of the per_act_token option, which invalidates the comparison between the CUTLASS and Triton kernels. Both should be addressed to ensure the benchmark's correctness and reliability.

Comment on lines +120 to +121
# Force per-tensor quantization for all cases
per_act_token = False
critical

The per_act_token parameter is unconditionally overwritten to False on line 121. This is a critical issue for a benchmark script: it causes incorrect results to be reported for the per_act_token=True configuration, and it makes the Triton-vs-CUTLASS comparison unfair, because a workaround for a CUTLASS-specific limitation is applied to both kernels. A benchmark should never silently change the configuration being tested.

A better approach is to remove this global overwrite. If CUTLASS does not support per_act_token=True, handle it conditionally: for example, skip the CUTLASS portion of the benchmark for that configuration and report its time as NaN, while still running the Triton benchmark correctly.
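The skip-and-report-NaN approach suggested above could look like the following sketch (the function names and call shape are hypothetical, not taken from the PR's script):

```python
import math


def run_config(run_triton, run_cutlass, per_act_token: bool,
               cutlass_supports_per_act_token: bool = False) -> dict:
    """Benchmark one configuration without silently rewriting it.

    Triton always runs as requested; CUTLASS is skipped (reported as
    NaN) when the requested per_act_token mode is unsupported, instead
    of forcing per_act_token = False for both kernels.
    """
    triton_ms = run_triton(per_act_token)
    if per_act_token and not cutlass_supports_per_act_token:
        cutlass_ms = math.nan  # unsupported: skip rather than mis-measure
    else:
        cutlass_ms = run_cutlass(per_act_token)
    return {"triton_ms": triton_ms, "cutlass_ms": cutlass_ms}
```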

Comment on lines +92 to +98
# Per-channel quantization - not yet implemented properly
# For now, fall back to per-tensor quantization
w1_fp8q[expert], w1_scale_temp = ops.scaled_fp8_quant(w1[expert])
w2_fp8q[expert], w2_scale_temp = ops.scaled_fp8_quant(w2[expert])
# Expand scalar scales to the expected per-channel shape
w1_scale[expert] = w1_scale_temp.expand(2 * n, 1)
w2_scale[expert] = w2_scale_temp.expand(k, 1)
high

The comment at line 92 correctly states that per-channel quantization is not properly implemented. The current code falls back to per-tensor quantization, which is misleading for a benchmark: it does not perform true per-channel quantization, so incorrect performance results would be reported for this configuration. It is better to fail explicitly here than to report misleading numbers.

Suggested change
- # Per-channel quantization - not yet implemented properly
- # For now, fall back to per-tensor quantization
- w1_fp8q[expert], w1_scale_temp = ops.scaled_fp8_quant(w1[expert])
- w2_fp8q[expert], w2_scale_temp = ops.scaled_fp8_quant(w2[expert])
- # Expand scalar scales to the expected per-channel shape
- w1_scale[expert] = w1_scale_temp.expand(2 * n, 1)
- w2_scale[expert] = w2_scale_temp.expand(k, 1)
+ raise NotImplementedError(
+     "Per-channel quantization is not yet implemented properly. "
+     "This benchmark configuration should be disabled.")
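For reference, true per-channel quantization computes one scale per output channel rather than a single scalar per tensor. A NumPy sketch of that math follows; it assumes the FP8 E4M3 finite maximum of 448.0 and keeps the values in float32 (a real kernel would cast the scaled values to an FP8 dtype). This is illustrative only, not the vLLM ops API:

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3


def per_channel_fp8_quant(w: np.ndarray):
    """Quantize a (rows, cols) weight with one scale per output row.

    Returns (quantized, scales), where scales has shape (rows, 1) --
    the per-channel layout the w1_scale / w2_scale buffers expect.
    """
    amax = np.abs(w).max(axis=1, keepdims=True)        # (rows, 1)
    scales = np.maximum(amax, 1e-12) / FP8_E4M3_MAX    # avoid divide-by-zero
    # Scale into the representable FP8 range; float32 stands in for the
    # FP8 cast a real implementation would perform here.
    q = np.clip(w / scales, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.astype(np.float32), scales
```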

@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 20, 2025
@mgoin mgoin merged commit 0d235b8 into vllm-project:main Sep 24, 2025
56 checks passed
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
…#25302)

Signed-off-by: Chenxi Yang <cxyang@fb.com>
Co-authored-by: Chenxi Yang <cxyang@fb.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: Chenxi Yang <cxyang@fb.com>
Co-authored-by: Chenxi Yang <cxyang@fb.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
…#25302)

Signed-off-by: Chenxi Yang <cxyang@fb.com>
Co-authored-by: Chenxi Yang <cxyang@fb.com>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
…#25302)

Signed-off-by: Chenxi Yang <cxyang@fb.com>
Co-authored-by: Chenxi Yang <cxyang@fb.com>
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
…#25302)

Signed-off-by: Chenxi Yang <cxyang@fb.com>
Co-authored-by: Chenxi Yang <cxyang@fb.com>