
[perf]optimize nvfp4 #2267

Closed
Bruce-x-1997 wants to merge 4 commits into flashinfer-ai:main from Bruce-x-1997:bruce_optimize_nvfp4

Conversation

@Bruce-x-1997
Contributor

@Bruce-x-1997 commented Dec 25, 2025

📌 Description

I found that the nvfp4 implementation achieves only a 1.3-1.4x speedup over fp8 on the deepseek-v3-0324 model.
Since fp4 peak FLOPS is twice that of fp8, I think there is still room for optimization.

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features

    • Added new MOE configuration parameter with default value of 8.
  • Deprecations

    • The new MOE parameter will be unsupported after v0.5.0.
  • Improvements

    • Optimized FP4 quantization path for improved register usage and performance.
    • Enhanced device-side conversion utilities for better efficiency.
  • Version

    • Updated to v0.5.3.


@coderabbitai
Contributor

coderabbitai bot commented Dec 25, 2025

📝 Walkthrough

Walkthrough

This PR introduces a new tile_tokens_dim parameter to MOE functions with deprecation warnings, threading it through benchmarks and tests. Core computation logic is unchanged. FP4 quantization in allreduce fusion undergoes optimization to reduce register pressure. Version bumped to 0.5.3.

Changes

Cohort / File(s) Summary
MOE Core API
flashinfer/fused_moe/core.py
Added an optional tile_tokens_dim parameter to four MOE functions (trtllm_fp8_per_tensor_scale_moe, trtllm_fp8_block_scale_moe, trtllm_fp4_block_scale_moe, trtllm_fp4_block_scale_routed_moe), with a deprecation warning stating it will no longer be supported after v0.5.0. The parameter does not affect computation.
Benchmark Threading
benchmarks/README.md, benchmarks/bench_trtllm_gen_fused_moe_autotuner.py, benchmarks/routines/flashinfer_benchmark_utils.py, benchmarks/routines/moe.py
Added tile_tokens_dim (default 8) as CLI argument and threaded through FP4/FP8 test paths. Includes heuristic override for BlockMajorK with shuffled weights in FP8 BlockScale. Extended output metrics to include tile_tokens_dim.
Test Configuration
benchmarks/samples/sample_testlist_output.txt
Added tile_tokens_dim=8 to multiple test Namespace configurations (trtllm_fp4_block_scale_moe, trtllm_fp8_block_scale_moe, trtllm_fp8_per_tensor_scale_moe, cutlass_fused_moe). Updated routing method for one test from 'deepseek_v3' to 'renormalize'.
Test Call Sites
tests/moe/test_trtllm_gen_fused_moe.py, tests/moe/test_trtllm_gen_routed_fused_moe.py
Updated CUDA kernel calls to pass tile_tokens_dim=None argument to trtllm_fp4_block_scale_moe, trtllm_fp8_per_tensor_scale_moe, and trtllm_fp4_block_scale_routed_moe.
Kernel Implementation Notes
csrc/trtllm_fused_moe_kernel_launcher.cu
Added comments noting that tile_N is passed where tile_tokens_dim is expected in three launcher init calls. Indicates pre-existing parameter mismatch; no functional change.
Allreduce Fusion Optimization
include/flashinfer/comm/trtllm_allreduce_fusion.cuh
Introduced fp32_pair_to_e2m1() device helper for float-pair to e2m1 conversion. Optimized FP4 quantization path: replaced SFValue computation with direct quantized_sf flow. Streamlined FP32 accumulation with per-element scalar accumulation. Added FP4-specific block-size selection logic (160/192/128 threads).
Version
version.txt
Bumped version from 0.5.2 to 0.5.3.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • djmmoss
  • cyx-6
  • ttyio
  • bkryu
  • joker-eph
  • yongwww

Poem

🐰 A new tile_tokens_dim hops in with a warning clear,
"Deprecate by v0.5.0, friends dear!"
Register pressure eases in the fusion light,
Float2 pairs pack e2m1 so tight.
Threaded through benchmarks, tests all pass—
Version bumps, optimizations for the grass! 🌿✨

Pre-merge checks

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 55.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve coverage.
  • Description check (❓ Inconclusive): The description briefly explains the motivation (nvfp4 provides only a 1.3-1.4x speedup over fp8 despite 2x peak FLOPS), but the template's checkboxes are largely unchecked and several sections are incomplete. Complete the pre-commit checks section, confirm all tests pass, and provide specific reviewer notes about which optimizations were targeted and the expected performance improvements.

✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title '[perf]optimize nvfp4' is concise and directly relates to the main change (performance optimization for nvfp4), though it could be more descriptive about the specific optimizations and their scope.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Bruce-x-1997, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to significantly enhance the performance of NVIDIA FP4 (nvfp4) and FP8 Mixture-of-Experts (MoE) implementations, addressing an observed gap where FP4's speedup over FP8 was less than its theoretical potential. The changes introduce a new tile_tokens_dim parameter for experimental tuning of token processing, alongside several critical low-level CUDA kernel optimizations. These optimizations focus on reducing register pressure, streamlining floating-point conversions, and dynamically adjusting kernel launch configurations to maximize GPU occupancy and throughput for FP4 operations.

Highlights

  • New tile_tokens_dim Parameter: Introduced a new tile_tokens_dim argument across various Mixture-of-Experts (MoE) functions and benchmarks to allow for fine-grained control over token tiling, aiming to improve performance.
  • CUDA Kernel Optimizations: Implemented several low-level optimizations in CUDA kernels, including a new fp32_pair_to_e2m1 conversion function, improved scale factor calculation, and reduced register usage in cvt_warp_fp16_to_fp4 and allreduce_sum.
  • FP4-Specific Block Size Tuning: Added logic to dynamically adjust CUDA block sizes (e.g., to 160, 192, or 128) specifically for FP4 operations within the allreduce_fusion_kernel_launcher to enhance GPU occupancy and performance.
  • Deprecation Warning for tile_tokens_dim: Added warnings for the tile_tokens_dim parameter in Python API functions, indicating its planned deprecation in a future release (v0.5.0), suggesting it's a temporary tuning knob.


Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces performance optimizations for nvfp4, primarily within trtllm_allreduce_fusion.cuh. The changes focus on reducing register pressure and tuning launch configurations, which are well-implemented and should yield performance gains. A new tile_tokens_dim parameter is also added for MoE benchmarks and kernels, with deprecation warnings for its use in fp8 kernels. The overall changes are consistent and improve the codebase. I have one suggestion to refactor a piece of logic for better readability.

Comment on lines +1433 to +1452
  if constexpr (GetQuantType<Pattern> == QuantType::kFP4) {
    // Try to use 160 as block_size if possible (better occupancy for FP4)
    if (threads_per_token % 160 == 0 && 160 <= max_threads_per_block && 160 >= 128) {
      block_size = 160;
      cluster_size = threads_per_token / 160;
      if (cluster_size > 8) cluster_size = 8;
    }
    // Fallback: try 192, 128 if 160 doesn't work
    else if (threads_per_token % 192 == 0 && 192 <= max_threads_per_block && 192 >= 128) {
      block_size = 192;
      cluster_size = threads_per_token / 192;
      if (cluster_size > 8) cluster_size = 8;
    } else if (threads_per_token % 128 == 0 && 128 <= max_threads_per_block) {
      block_size = 128;
      cluster_size = threads_per_token / 128;
      if (cluster_size > 8) cluster_size = 8;
    }
    // Update threads_per_block to match block_size for SM count check
    threads_per_block = block_size;
  }
Contributor


medium

The logic for selecting block_size for FP4 kernels can be simplified for better readability and maintainability. The conditions 160 >= 128 and 192 >= 128 are always true and can be removed. Also, the logic for capping cluster_size is repeated. Consider refactoring this block to reduce redundancy.

  if constexpr (GetQuantType<Pattern> == QuantType::kFP4) {
    int new_block_size = 0;
    // Try to use 160 as block_size if possible (better occupancy for FP4)
    if (threads_per_token % 160 == 0 && 160 <= max_threads_per_block) {
      new_block_size = 160;
    }
    // Fallback: try 192, 128 if 160 doesn't work
    else if (threads_per_token % 192 == 0 && 192 <= max_threads_per_block) {
      new_block_size = 192;
    } else if (threads_per_token % 128 == 0 && 128 <= max_threads_per_block) {
      new_block_size = 128;
    }

    if (new_block_size > 0) {
      block_size = new_block_size;
      cluster_size = threads_per_token / new_block_size;
      if (cluster_size > 8) {
        cluster_size = 8;
      }
    }
    // Update threads_per_block to match block_size for SM count check
    threads_per_block = block_size;
  }

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
flashinfer/fused_moe/core.py (1)

2106-2138: FP4 wrappers missing default for tile_tokens_dim breaks backward compatibility

Both trtllm_fp4_block_scale_moe and trtllm_fp4_block_scale_routed_moe declare tile_tokens_dim: Optional[int] without a default value, positioned between routed_scaling_factor and routing_method_type. The FP8 counterpart (trtllm_fp8_block_scale_moe) defaults tile_tokens_dim to None, remaining backward-compatible.

This breaks existing code using positional arguments through routing_method_type, causing a TypeError for missing required argument tile_tokens_dim. Add = None to both FP4 function signatures to match the FP8 pattern and preserve backward compatibility.

🧹 Nitpick comments (4)
tests/moe/test_trtllm_gen_fused_moe.py (1)

185-215: Tile_tokens_dim wiring in test MoE calls looks correct; consider using keyword for FP8 paths

The added tile_tokens_dim=None for the FP4 graph path and the inserted None positional arguments for the FP8 block-scale and FP8 per‑tensor paths match the updated Python wrappers in flashinfer.fused_moe.core (the new parameter is between routed_scaling_factor and routing_method_type). Behavior remains unchanged because tile_tokens_dim is None and is ignored by the core.

For long‑term maintainability, you may want to pass tile_tokens_dim by keyword in the FP8 calls as you already do for the FP4 path; that would make these tests more robust to any future reordering of the wrapper’s trailing parameters.

Also applies to: 771-797, 947-973

benchmarks/routines/moe.py (2)

119-125: CLI tile_tokens_dim threading is coherent, but currently acts as a metadata knob only

The new --tile_tokens_dim argument is parsed, threaded into all three TRT‑LLM MoE benchmarks, and written into the result dicts. The call‑sites correctly pass it into the updated Python wrappers (trtllm_fp4_block_scale_moe, trtllm_fp8_block_scale_moe, trtllm_fp8_per_tensor_scale_moe) in the right positional/keyword slots, so everything is consistent within this repo.

However, the current core wrappers only use tile_tokens_dim to gate a one‑time deprecation warning and do not forward its value into the underlying C++ runner, so changing --tile_tokens_dim does not actually influence the kernel configuration yet. From a benchmark‑user perspective this behaves more like an informational field than a tuning knob.

If the intent is:

  • Just compatibility / logging: consider documenting here (or in help text) that the flag is deprecated and ignored by the kernel, and is present only for backward‑compat / reporting.
  • Real tuning in future: once you wire tile_tokens_dim through flashinfer.fused_moe.core into the C++ launcher, this plumbing should already be in the right place; at that point you may also want to validate that the provided value is within the supported tile set.

Also applies to: 563-564, 682-713, 765-784, 1188-1189, 1323-1324, 1384-1385, 1451-1452, 1530-1531, 1588-1588


1280-1300: BlockMajorK heuristic for tile_tokens_dim is reasonable, but please clarify intent

The BlockMajorK override:

  • Computes tokens_per_expert ≈ (num_tokens * top_k) / local_num_experts with sensible guards.
  • Rounds to next power of two, then clamps to [8, 64].
  • Logs when overriding a user‑supplied value.

That’s a sane heuristic and matches the idea of choosing a tile size proportional to tokens per expert. Given that tile_tokens_dim is currently ignored by the core (other than logging a warning), this override only affects the metadata recorded in results.

If you plan to make tile_tokens_dim drive the actual kernel selection later, this heuristic is a good starting point, but you may want to:

  • Revisit the [8, 64] clamp against the set of tile sizes supported by the C++ runner.
  • Document in the CLI help (and possibly here) that BlockMajorK may override the requested tile for better alignment with kernel constraints.
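
For reference, the rounding-and-clamping step described above can be sketched as follows. This is an illustrative C++ rendering only (the benchmark code itself is Python), and the function name and guards are placeholders, not the PR's actual code:

  #include <algorithm>
  #include <cstdint>

  // Heuristic from the review: tokens_per_expert ≈ (num_tokens * top_k) / local_num_experts,
  // rounded up to the next power of two and clamped to [8, 64].
  int heuristic_tile_tokens_dim(int64_t num_tokens, int64_t top_k, int64_t local_num_experts) {
    if (local_num_experts <= 0) return 8;  // guard against division by zero
    int64_t tokens_per_expert = (num_tokens * top_k) / local_num_experts;
    int64_t tile = 1;
    while (tile < tokens_per_expert) tile <<= 1;  // next power of two
    return static_cast<int>(std::clamp<int64_t>(tile, 8, 64));
  }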
flashinfer/fused_moe/core.py (1)

1940-1997: Tile_tokens_dim deprecation handling is consistent but currently discards the knob entirely

The four high‑level Python wrappers:

  • trtllm_fp8_per_tensor_scale_moe
  • trtllm_fp8_block_scale_moe
  • trtllm_fp4_block_scale_moe
  • trtllm_fp4_block_scale_routed_moe

now all accept a tile_tokens_dim parameter and emit a one‑time deprecation warning when it is not None. However, none of them forward this argument into the underlying TRT‑LLM custom ops; it is only used to decide whether to log a warning, and then discarded.

Given the rest of this PR adds CLI and benchmark wiring plus heuristics around tile_tokens_dim, it’s worth making the intent explicit:

  • If the goal is pure deprecation / backward compatibility, this is fine functionally, but:
    • Consider adjusting the warning text (“will no longer be supported after v0.5.0”) to match the current versioning story (we’re already past 0.5.0) or to use a vaguer “in a future release”.
    • It might also help to mention explicitly in the docstrings that the parameter is ignored and exists only for compatibility, to avoid users trying to tune it.
  • If, instead, you eventually want tile_tokens_dim to control tile_N in the C++ runner, you’ll need a follow‑up change that threads this value into the appropriate C++ init APIs and/or configuration structures so it actually affects kernel selection.

Right now there is no functional effect from any non‑None value, beyond triggering the warning.

Also applies to: 2021-2077, 2106-2183, 2243-2322

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 421433e and f8d8c90.

⛔ Files ignored due to path filters (1)
  • benchmarks/samples/sample_testlist_output.csv is excluded by !**/*.csv
📒 Files selected for processing (11)
  • benchmarks/README.md
  • benchmarks/bench_trtllm_gen_fused_moe_autotuner.py
  • benchmarks/routines/flashinfer_benchmark_utils.py
  • benchmarks/routines/moe.py
  • benchmarks/samples/sample_testlist_output.txt
  • csrc/trtllm_fused_moe_kernel_launcher.cu
  • flashinfer/fused_moe/core.py
  • include/flashinfer/comm/trtllm_allreduce_fusion.cuh
  • tests/moe/test_trtllm_gen_fused_moe.py
  • tests/moe/test_trtllm_gen_routed_fused_moe.py
  • version.txt
🧰 Additional context used
🧬 Code graph analysis (2)
flashinfer/fused_moe/core.py (1)
flashinfer/jit/core.py (1)
  • warning_once (78-83)
benchmarks/routines/moe.py (3)
csrc/trtllm_fused_moe_kernel_launcher.cu (17)
  • args (142-144)
  • args (419-428)
  • args (419-421)
  • args (536-558)
  • args (536-538)
  • args (728-752)
  • args (728-730)
  • args (979-1006)
  • args (979-982)
  • top_k (490-517)
  • top_k (490-493)
  • top_k (683-710)
  • top_k (683-687)
  • top_k (913-938)
  • top_k (913-916)
  • top_k (1223-1249)
  • top_k (1223-1226)
flashinfer/fused_moe/core.py (1)
  • WeightLayout (163-170)
include/flashinfer/trtllm/fused_moe/runner.h (2)
  • top_k (270-270)
  • local_num_experts (277-277)
🔇 Additional comments (13)
version.txt (1)

1-1: LGTM: Version bump is appropriate.

The version increment from 0.5.2 to 0.5.3 is suitable for a performance optimization release.

benchmarks/README.md (1)

169-170: LGTM: Documentation accurately describes the new parameter.

The tile_tokens_dim parameter is properly documented with its default value.

csrc/trtllm_fused_moe_kernel_launcher.cu (2)

1389-1391: Clarify whether this is actually incorrect behavior.

The comments state "This seems incorrect but we match the original behavior." If there's a genuine bug where tile_N is incorrectly passed as tile_tokens_dim, it should be fixed rather than documented and perpetuated.

However, if tile_N (tile size in the N/token dimension) is semantically equivalent to tile_tokens_dim, the comment is misleading and should be revised or removed.

Please verify:

  1. Are tile_N and tile_tokens_dim semantically equivalent concepts?
  2. If they differ, what is the correct value to pass?
  3. If the behavior is correct, update the comment to clarify rather than suggest incorrectness.

1473-1475: Same concern: clarify whether this behavior is correct.

This is a duplicate of the concern at lines 1389-1391. The comment suggests incorrect behavior but preserves it. Please verify if this is genuinely a bug or just confusing naming.

benchmarks/samples/sample_testlist_output.txt (2)

295-295: LGTM: Test output correctly includes the new parameter.

The tile_tokens_dim=8 additions to test configurations are consistent with the documented default value.

Also applies to: 306-306, 317-317, 328-328, 350-350


339-339: Note: Routing method change in test configuration.

Line 339 shows routing_method='renormalize' in this test output. While this change appears unrelated to the tile_tokens_dim additions, please verify this routing method change is intentional.

include/flashinfer/comm/trtllm_allreduce_fusion.cuh (4)

534-552: LGTM: Well-designed helper function for pipelined FP4 conversion.

The fp32_pair_to_e2m1 function enables pipelined processing to reduce register usage. The implementation correctly:

  • Uses inline PTX for efficient conversion on SM 10.0+
  • Documents the register allocation behavior
  • Provides safe fallback for older architectures
  • Extracts the packed result correctly
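
To make the helper's role concrete, below is a minimal reference sketch of a pair-to-e2m1 conversion. It illustrates only the portable fallback idea (nearest representable e2m1 magnitude, two nibbles packed per byte); the PR's helper uses an inline-PTX conversion on SM 10.0+, and its rounding mode and nibble order may differ, so treat the names and packing here as assumptions:

  #include <cmath>
  #include <cstdint>

  // e2m1 (FP4) has 1 sign, 2 exponent, and 1 mantissa bit; representable magnitudes
  // are {0, 0.5, 1, 1.5, 2, 3, 4, 6}. This reference encoder picks the nearest
  // magnitude (values above 6 saturate) and is NOT the optimized kernel path.
  __host__ __device__ inline uint8_t fp32_to_e2m1_ref(float x) {
    const float mags[8] = {0.f, 0.5f, 1.f, 1.5f, 2.f, 3.f, 4.f, 6.f};
    uint8_t sign = (x < 0.f) ? 0x8 : 0x0;
    float a = fabsf(x);
    uint8_t best = 0;
    float best_err = fabsf(a - mags[0]);
    for (uint8_t i = 1; i < 8; ++i) {
      float err = fabsf(a - mags[i]);
      if (err < best_err) { best_err = err; best = i; }
    }
    return sign | best;  // 4-bit code: sign, 2-bit exponent, 1-bit mantissa
  }

  // Pack two floats into one byte of e2m1 nibbles. The nibble order (first value
  // in the low nibble) is an assumption for this sketch.
  __host__ __device__ inline uint8_t fp32_pair_to_e2m1_ref(float lo, float hi) {
    return static_cast<uint8_t>(fp32_to_e2m1_ref(lo) | (fp32_to_e2m1_ref(hi) << 4));
  }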

625-701: LGTM: Effective optimizations to reduce register pressure.

The changes to cvt_warp_fp16_to_fp4 are well-designed:

  1. Line 626: Pre-computing RECIPROCAL_6 eliminates repeated division operations.

  2. Lines 646-667: The SF computation is mathematically equivalent but more efficient:

    • Computes quantized_sf directly from the quantized value
    • Derives outputScale = SFScaleVal / quantized_sf in one step
    • Avoids storing intermediate SFValue
  3. Lines 675-694: The conversion loop optimization significantly reduces register pressure:

    • Uses single float2 register (8 bytes) instead of array (32 bytes)
    • Converts and packs immediately using the new fp32_pair_to_e2m1 helper
    • Maintains correctness while improving pipelining
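
As a rough, hedged illustration of the scale-factor recipe this comment describes (names follow the comment's terminology; the actual kernel code may differ, and using __nv_fp8_e4m3 here is just one way to round the scale to e4m3):

  #include <cstdint>
  #include <cuda_fp8.h>  // __nv_fp8_e4m3

  // Given a block's absolute maximum and the global scale SFScaleVal, derive the
  // stored e4m3 scale-factor byte and the per-element outputScale applied before
  // e2m1 quantization: quantized_sf = e4m3(SFScaleVal * amax / 6) and
  // outputScale = SFScaleVal / quantized_sf, matching the review's description.
  __host__ __device__ inline float nvfp4_output_scale(float block_amax, float SFScaleVal,
                                                      uint8_t* sf_out) {
    constexpr float RECIPROCAL_6 = 1.0f / 6.0f;  // 6.0 is the largest e2m1 magnitude
    __nv_fp8_e4m3 sf_e4m3(SFScaleVal * block_amax * RECIPROCAL_6);
    *sf_out = sf_e4m3.__x;                       // raw e4m3 byte written to the SF buffer
    float quantized_sf = static_cast<float>(sf_e4m3);
    return quantized_sf != 0.f ? SFScaleVal / quantized_sf : 0.f;
  }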

1132-1157: LGTM: Efficient register usage optimization for FP32 accumulation.

The change to process elements one at a time instead of storing an acc_f32[VEC_SIZE] array reduces register usage from 32 bytes to 4 bytes (single scalar), while maintaining mathematical correctness. This is a straightforward and effective optimization for FP32 accumulation paths.
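
A schematic before/after of that accumulation pattern may help; sizes and names below are assumptions for illustration (the real kernel sums vectors loaded from each rank's buffer), not the PR's code:

  constexpr int VEC_SIZE = 8;   // elements per vectorized load (assumed)
  constexpr int NUM_RANKS = 8;  // world size (assumed)

  // Before: a full acc_f32[VEC_SIZE] array stays live across the rank loop,
  // costing VEC_SIZE * 4 bytes of registers per thread.
  __device__ void accumulate_with_array(const float* __restrict__ vals, float* __restrict__ out) {
    float acc_f32[VEC_SIZE] = {};
    for (int r = 0; r < NUM_RANKS; ++r)
      for (int i = 0; i < VEC_SIZE; ++i) acc_f32[i] += vals[r * VEC_SIZE + i];
    for (int i = 0; i < VEC_SIZE; ++i) out[i] = acc_f32[i];
  }

  // After: per element, only one scalar accumulator is live at a time (4 bytes),
  // which is the register-pressure reduction the comment refers to.
  __device__ void accumulate_per_element(const float* __restrict__ vals, float* __restrict__ out) {
    for (int i = 0; i < VEC_SIZE; ++i) {
      float acc = 0.f;
      for (int r = 0; r < NUM_RANKS; ++r) acc += vals[r * VEC_SIZE + i];
      out[i] = acc;
    }
  }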


1428-1475: LGTM: FP4-specific block size optimization improves occupancy.

The FP4 block size selection logic is well-structured:

  1. Lines 1433-1452: FP4 path tries specific block sizes (160, 192, 128) for better occupancy before the SM count check, preventing them from being overridden.

  2. Lines 1456-1464: SM count check respects the FP4-optimized block_size if already set.

  3. Lines 1467-1469: Non-FP4 paths update block_size from threads_per_block.

  4. Lines 1472-1473: Final check correctly uses block_size instead of threads_per_block.

The logic correctly handles both FP4 and non-FP4 paths without conflict.

benchmarks/routines/flashinfer_benchmark_utils.py (1)

56-56: LGTM: Output schema correctly includes the new parameter.

Adding tile_tokens_dim to the MOE output columns aligns with the parameter's introduction throughout the codebase and enables benchmarks to report this metric.

tests/moe/test_trtllm_gen_routed_fused_moe.py (1)

183-183: LGTM: Tests correctly pass None for the new optional parameter.

The additions of None, # tile_tokens_dim to both function calls maintain existing test behavior while accommodating the new parameter signature.

Also applies to: 237-237

benchmarks/bench_trtllm_gen_fused_moe_autotuner.py (1)

99-125: Autotuner call‑sites correctly adapted to new core signatures

Passing None as the new tile_tokens_dim argument for the FP8 block‑scale, FP8 per‑tensor, and FP4 autotuner paths keeps these benchmarks compatible with the updated flashinfer.fused_moe.core APIs while preserving the original behavior (since tile_tokens_dim is ignored when None).

Looks good as a minimal, non‑functional adjustment.

Also applies to: 127-149, 265-297
