Feature: Support Relu2 in FusedMoE fp8 cutlass path #27261
mgoin merged 13 commits into vllm-project:main
Conversation
Force-pushed from fb950dd to c595cbb
Force-pushed from 98b3df9 to ed78454
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Amir Klein <203507526+amirkl94@users.noreply.github.com>
Force-pushed from ed78454 to ef94e83
Signed-off-by: Amir Klein <203507526+amirkl94@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
```diff
  self.cutlass_fp8_supported = cutlass_fp8_supported()
  self.flashinfer_moe_backend: FlashinferMoeBackend | None = None
- if (
-     envs.VLLM_USE_FLASHINFER_MOE_FP8
-     and has_flashinfer_moe()
-     and self.moe.is_act_and_mul
- ):
+ if envs.VLLM_USE_FLASHINFER_MOE_FP8 and has_flashinfer_moe():
      self.flashinfer_moe_backend = get_flashinfer_moe_backend()
      logger.info_once(
```
There was a problem hiding this comment.
Avoid enabling TensorRT flashinfer for relu2 activations
Removing the self.moe.is_act_and_mul guard means a flashinfer backend is now enabled whenever VLLM_USE_FLASHINFER_MOE_FP8 is set, regardless of the activation. If the user selects the latency backend (TensorRT‑LLM) and runs a relu2_no_mul model, apply() will hit the hard assertion activation == "silu" and abort instead of falling back to the existing non‑flashinfer path, which previously worked (albeit slower). Consider only enabling flashinfer when either the model is gated or the chosen backend is CUTLASS; otherwise leave flashinfer_moe_backend as None so non‑gated models continue to run.
Signed-off-by: Amir Klein <203507526+amirkl94@users.noreply.github.com>
```diff
- g1_alphas=(layer.w13_weight_scale * layer.w13_input_scale).squeeze(),
+ g1_alphas=layer.output1_scales_gate_scalar.squeeze()
```
It looks like output1_scales_gate_scalar and output2_scales_scalar are only used in flashinfer_trtllm_moe. It's not clear from the vLLM source code how these are getting set. Does flashinfer add them to the layer's member variables?
I think we should try to keep the quantization format decoupled from the kernels used for the implementation
These factors are registered into the layer during `process_weights_after_loading` (line 566).
They are registered to the layer when we're using cutlass or trtllm backends, using `register_moe_scaling_factors`.
The problem I see with trying to decouple this is that the cutlass path uses the layer's `get_fused_moe_quant_config` to get the relevant scaling factors here.
There are 2 options I see here, let me know which one you prefer:
- I can remove the `if else` when setting the quantization and set it the same for all paths.
- I can build the needed quantization for the flashinfer cutlass kernel in here, and not change the quantization in the ModelOptFusedMoEFP8 object.
> I can remove the `if else` when setting the quantization and set it the same for all paths.
I think I'd prefer to just provide the same information in all cases. The kernel can choose to ignore it
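The "same information in all cases" idea could look roughly like this. A hedged sketch only: the field names come from the review thread, but the config shape and the `build_quant_config` helper are assumptions, not vLLM's actual API.

```python
from types import SimpleNamespace


def build_quant_config(layer) -> dict:
    # Build one quant config the same way for every path (cutlass,
    # trtllm, triton); a kernel that does not need a field simply
    # ignores it. Field names follow the diff in the thread above.
    return {
        "g1_alphas": layer.output1_scales_gate_scalar,
        "g2_alphas": layer.output2_scales_scalar,
    }


# Toy layer standing in for the real module after
# process_weights_after_loading has registered the scaling factors:
layer = SimpleNamespace(
    output1_scales_gate_scalar=[0.5],
    output2_scales_scalar=[0.25],
)
cfg = build_quant_config(layer)
```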
```python
if (
    self.flashinfer_moe_backend == FlashinferMoeBackend.TENSORRT_LLM
    and not self.moe.is_act_and_mul
):
    logger.info_once(
        "Non-gated MoE is not supported for min-latency mode,"
        "falling back to high-throughput mode"
    )
```
It seems you are missing the override of self.flashinfer_moe_backend here
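The missing override the reviewer points at could be sketched as a small self-contained function. The enum and helper below are illustrative stand-ins for the real module state, not vLLM's code; the reassignment to CUTLASS after the log message is the part the snippet above leaves out.

```python
import logging
from enum import Enum

logger = logging.getLogger(__name__)


class FlashinferMoeBackend(Enum):
    # Hypothetical stand-in for vLLM's backend enum (names assumed).
    CUTLASS = "cutlass"            # high-throughput path
    TENSORRT_LLM = "tensorrt_llm"  # min-latency path


def resolve_backend(
    backend: FlashinferMoeBackend, is_act_and_mul: bool
) -> FlashinferMoeBackend:
    # If min-latency (TensorRT-LLM) was chosen for a non-gated MoE, fall
    # back to the high-throughput CUTLASS path -- this reassignment is
    # the override the reviewer says was left out.
    if backend == FlashinferMoeBackend.TENSORRT_LLM and not is_act_and_mul:
        logger.info(
            "Non-gated MoE is not supported for min-latency mode, "
            "falling back to high-throughput mode"
        )
        backend = FlashinferMoeBackend.CUTLASS
    return backend


assert resolve_backend(FlashinferMoeBackend.TENSORRT_LLM, False) is FlashinferMoeBackend.CUTLASS
assert resolve_backend(FlashinferMoeBackend.TENSORRT_LLM, True) is FlashinferMoeBackend.TENSORRT_LLM
```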
Signed-off-by: Michael Goin <mgoin64@gmail.com>
mgoin left a comment
PTAL at the failing blackwell test
tests/kernels/moe/test_flashinfer.py (Outdated)
```python
if activation != "relu2_no_mul":
    is_gated = False
```
This doesn't seem right as it breaks test_flashinfer_per_tensor_moe_fp8_no_graph on blackwell
https://buildkite.com/vllm/ci/builds/38920/steps/canvas?jid=019a7fad-270b-4d40-8820-e3a1e75dc35e#019a7fad-270b-4d40-8820-e3a1e75dc35e/102-2387
Yeah, it should be `if activation == "relu2_no_mul":`. I originally wrote it as a one-liner but the pre-commit hook complained, and I fixed it incorrectly. Changed it back and this should be correct.
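The corrected condition from this thread can be sketched as a tiny self-contained helper (the `is_gated_activation` name and `default` parameter are illustrative, not the test's actual code):

```python
def is_gated_activation(activation: str, default: bool = True) -> bool:
    # The fixed logic: relu2_no_mul is the non-gated activation, so
    # is_gated flips to False only for it; every other activation
    # keeps the gated default.
    is_gated = default
    if activation == "relu2_no_mul":
        is_gated = False
    return is_gated


assert is_gated_activation("silu") is True
assert is_gated_activation("relu2_no_mul") is False
```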
Signed-off-by: Amir Klein <203507526+amirkl94@users.noreply.github.com>
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Amir Klein <203507526+amirkl94@users.noreply.github.com>
Purpose
This PR enables the FusedMoE FP8 cutlass path for models using the non-gated relu2 activation. This new path gives a performance gain of around 20% in output token throughput over the triton path.
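For context on "non-gated relu2": assuming relu2 here denotes squared ReLU (an interpretation, not stated in the PR text), the gated vs non-gated distinction can be sketched in plain Python:

```python
import math


def relu2(xs: list[float]) -> list[float]:
    # Squared ReLU (assumed meaning of "relu2"): max(x, 0) ** 2 applied
    # elementwise to the whole intermediate tensor -- no gating, so the
    # output width matches the input width.
    return [max(x, 0.0) ** 2 for x in xs]


def silu_and_mul(xs: list[float]) -> list[float]:
    # The gated "act-and-mul" pattern used by silu MoE layers: the
    # intermediate is split in half, and the activation of the first
    # half gates the second half, halving the width.
    half = len(xs) // 2
    gate, up = xs[:half], xs[half:]
    return [g / (1.0 + math.exp(-g)) * u for g, u in zip(gate, up)]


assert relu2([-1.0, 2.0]) == [0.0, 4.0]   # full width preserved
assert len(silu_and_mul([-1.0, 2.0])) == 1  # width halved by gating
```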
This PR requires flashinfer 0.5.0.

Tests

New parameterization in `test_flashinfer.py::test_flashinfer_cutlass_moe_fp8_no_graph` to verify the new non-gated activation.

Performance tests:
Ran on a single H100. Started the server (once with `VLLM_USE_FLASHINFER_MOE_FP8=1` and once with `VLLM_USE_FLASHINFER_MOE_FP8=0`).

Benchmarked using:
triton path yielded:
cutlass path yielded:
~17% perf gain for peak (decode), ~40% perf gain for average token throughput (includes prefill).