
Conversation

@zongfeijing
Collaborator

@zongfeijing commented Dec 2, 2025

Summary by CodeRabbit

  • New Features

    • Added gather-based fusion mode for grouped GEMM operations with token mapping support.
    • Added FP4 unswizzled scale inference utility function.
  • Tests

    • Added test for gather-based grouped GEMM with SwiGLU fusion.
  • Chores

    • Updated pre-commit configuration to ignore "subtiles" in spelling checks.

✏️ Tip: You can customize this high-level summary in your review settings.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
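For example, a comment along the lines of the following would launch only the listed test stage with fail-fast disabled (per the flag descriptions above, such a run does not update the GitHub check status):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"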

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@zongfeijing changed the title from "Add gather fc1 kernel" to "[None] [feat] Add gather fc1 kernel by cuteDSL" on Dec 2, 2025
@zongfeijing changed the title from "[None] [feat] Add gather fc1 kernel by cuteDSL" to "[TRTLLM-9371] [feat] Add gather fc1 kernel by cuteDSL" on Dec 3, 2025
@kaiyux changed the title from "[TRTLLM-9371] [feat] Add gather fc1 kernel by cuteDSL" to "[TRTLLM-9685] [feat] Add gather fc1 kernel by cuteDSL" on Dec 4, 2025
@zongfeijing force-pushed the user/zongfeij/gather_fc1 branch 3 times, most recently from e96e2e1 to a61ad60, on December 5, 2025 at 12:13
@zongfeijing marked this pull request as ready for review on December 5, 2025 at 12:13
@zongfeijing requested review from a team as code owners on December 5, 2025 at 12:13
@coderabbitai
Contributor

coderabbitai bot commented Dec 5, 2025

📝 Walkthrough

Walkthrough

The changes introduce a gather-based fusion path for grouped GEMM operations in NVFP4, replacing an explicit two-step permutation flow with a single fused kernel. Updates include a new gather fusion runner class, token ID mapping generation, shape inference adjustments, and comprehensive testing for the new kernel pathway.
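As a rough illustration of what the fused path computes, the sketch below is a pure-PyTorch, unquantized reference collapsed to a single expert. It is illustrative only: the real kernel operates on NVFP4 data with block scales, per-expert tiles, and interleaved FC1 weights, and all names below are placeholders rather than the module's actual identifiers.

import torch
import torch.nn.functional as F

def reference_gather_gemm_swiglu(a: torch.Tensor, w1: torch.Tensor,
                                 token_id_mapping: torch.Tensor) -> torch.Tensor:
    # a: [orig_m, k] original activations; w1: [2*n, k] FC1 weights (gate and up projections);
    # token_id_mapping: [m] source-token index per permuted slot, or -1 for padding slots.
    valid = token_id_mapping >= 0
    x = a.new_zeros((token_id_mapping.numel(), a.shape[1]))
    x[valid] = a[token_id_mapping[valid]]      # the "gather" that the fused kernel performs internally
    h = x @ w1.t()                             # grouped GEMM, collapsed to one expert for illustration
    gate, up = h.chunk(2, dim=-1)              # real weights are interleaved; chunking is a simplification
    return F.silu(gate) * up                   # SwiGLU epilogue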

Changes

  • Configuration (.pre-commit-config.yaml): Added "subtiles" to the codespell ignore list.
  • Core Gather Fusion Implementation (tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py): Introduced gather-based fusion infrastructure: extended GroupedGemmInputsHelper with a fuse_gather flag and shape inference logic; added generate_token_id_mapping() and inputs_pre_hook_gather_fusion() methods; created the new Sm100BlockScaledContiguousGatherGroupedGemmSwigluFusionRunner class with tactic discovery, tuning config, and kernel invocation; imported gather-based kernel support.
  • Integration & Utilities (tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py, tensorrt_llm/_torch/utils.py): Replaced the two-step moe_permute + grouped_gemm flow with a single fused cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell operation; added tile mappings and token_id_mapping parameters; added the fp4_unswizzled_scale_infer_shape() utility for unswizzled FP4 scale shape computation.
  • Testing (tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py): Added a comprehensive test, test_nvfp4_gather_grouped_gemm_swiglu_blackwell(), with multiple parameterizations (tile_size, ep_size, top_k, num_tokens), covering token routing, per-expert token counting, FP4 quantization, reference comparison, and output validation with mask-based handling of padding tokens.

Sequence Diagram(s)

sequenceDiagram
    participant Caller as MoE Forward Pass
    participant Helper as GroupedGemmInputsHelper
    participant Runner as Sm100GatherGroupedGemmRunner
    participant KernelExec as NVFP4 Gather Kernel
    
    Caller->>Helper: generate_token_id_mapping(num_tokens, expert_counts)
    Helper->>Helper: Create token-to-expert mapping
    Helper-->>Caller: token_id_mapping
    
    Caller->>Runner: initialize with gather=True
    Runner->>Runner: Validate tile_size & SM version
    
    Caller->>Helper: inputs_pre_hook_gather_fusion(inputs)
    Helper->>Helper: Prepare tile_idx_to_group_idx,<br/>tile_idx_to_mn_limit,<br/>token_id_mapping tensors
    Helper-->>Caller: augmented_inputs
    
    Caller->>Runner: get_valid_tactics(inputs, profile)
    Runner->>Runner: Enumerate gather-specific tactics<br/>based on tile_size, top_k, etc.
    Runner-->>Caller: tactic_list
    
    Caller->>Runner: forward(inputs, tactic)
    Runner->>Runner: Marshal kernel pointers
    Runner->>KernelExec: Invoke with token_id_mapping & mappings
    KernelExec->>KernelExec: Gather tokens per expert<br/>compute grouped GEMM<br/>apply SwiGLU
    KernelExec-->>Runner: fused_output
    Runner-->>Caller: output

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py: Dense logic for gather fusion runner, shape inference with conditional behavior, token_id_mapping generation pathway, and dynamic tactic/kernel selection based on gather parameters. Requires careful review of the new pre-hook and shape inference dependencies.
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py: Verify replacement of two-step flow with fused operation; ensure all required parameters (top_k, num_local_experts, local_expert_offset, tile_size, mappings) are correctly threaded through.
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py: Complex test setup with token routing, FP4 quantization, reference path, and mask-based padding handling; validate correctness of comparison logic and assertion thresholds.
  • Cross-file consistency: Ensure GroupedGemmInputsHelper changes and new runner class work together seamlessly in both standalone and integrated contexts.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The PR description is entirely the template, with no actual content filled in; it gives no description of the feature, no test coverage explanation, and no rationale. Resolution: fill in the Description and Test Coverage sections with details about the gather fc1 kernel feature and the test cases added for validation.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 23.08%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly and specifically describes the main change: adding a gather fc1 kernel using cuteDSL, with a proper JIRA ticket reference and [feat] type designation.


Contributor

@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
tensorrt_llm/_torch/utils.py (1)

294-300: Unswizzled FP4 scale shape helper looks correct; minor lint nit

fp4_unswizzled_scale_infer_shape correctly mirrors fp4_scale_infer_shape with is_swizzled_layout=False, which matches its intended use in the gather path. The local out_shape binding is unused (same as in fp4_scale_infer_shape), which triggers RUF059; you can either drop it or rename to _out_shape in both helpers to quiet lint.

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1)

646-858: Gather-based NVFP4 SwiGLU test is solid; only micro cleanups are optional

The test exercises the full gather path (routing, tile metadata, token_id_mapping, GEMM+SwiGLU, FP4 quant, and per-token scales) and correctly masks out padding positions when comparing outputs and scale factors, which is what matters for functional validation.

If you want to simplify a bit (optional):

  • valid_token_mask and the scale selection loops can be vectorized using mask = token_id_mapping[:num_valid_permuted_tokens] >= 0 and boolean indexing instead of Python for loops (a sketch follows after this list).
  • The __main__ block will bypass the @pytest.mark.skipif SM check if someone runs the file directly on a non-SM100 GPU; consider guarding that path or relying on pytest only.

These are style/ergonomics only; the current logic is correct.
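For illustration, a minimal sketch of that vectorization; the tensor names follow the suggestion above, and the exact variables in the test may differ.

import torch

def select_valid_scales(token_id_mapping: torch.Tensor,
                        num_valid_permuted_tokens: int,
                        ref_scales: torch.Tensor):
    # Mask of non-padding permuted slots; -1 marks padding added for tile alignment.
    mask = token_id_mapping[:num_valid_permuted_tokens] >= 0
    # Boolean indexing replaces the per-token Python loop when picking reference scales.
    selected = ref_scales[token_id_mapping[:num_valid_permuted_tokens][mask]]
    return mask, selected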

tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (3)

25-41: Gather-specific helper extensions are coherent; unify padding sentinels if possible

The GroupedGemmInputsHelper extensions for gather mode (fuse_gather, shape_infer_tensor_idx, generate_token_id_mapping, and inputs_pre_hook_gather_fusion) are consistent with the existing tile/permute helpers and with how the gather kernel expects token_id_mapping (expanded indices with -1 for padding). Using shape_infer_tensor_idx to switch inference between a and token_id_mapping keeps the same helper usable for both fused and gather paths.

One small consistency nit: for tile-related metadata you use self.pad_val = int(2e9) as the padding sentinel, while token_id_mapping uses -1. If the kernels rely on these sentinels in more than one place, it may be safer to centralize them as named constants (or at least document the two different “invalid” conventions in the class docstring) to avoid divergence later. Functionally this is fine as-is.

Also applies to: 74-85, 143-176, 238-283
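A minimal way to centralize the two sentinels as suggested; the constant names are illustrative, not taken from the module.

# Illustrative named constants for the two padding conventions described above.
TILE_METADATA_PAD = int(2e9)   # sentinel used in the tile_idx_to_* metadata (self.pad_val today)
TOKEN_ID_MAPPING_PAD = -1      # sentinel marking padding slots in token_id_mapping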


1663-1780: Gather SwiGLU fusion runner and custom op look aligned with existing runners; only lint/clarity nits

The new Sm100BlockScaledContiguousGatherGroupedGemmSwigluFusionRunner and cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell custom op mirror the existing contiguous SwiGLU runner:

  • Shape logic uses m = token_id_mapping.size(0) (post-gather, permuted M) and orig_m = a.size(0) (original tokens) and enforces the same NVFP4 alignment constraints on K/N and scales.
  • Tuning config correctly switches dynamic tensor specs and constraints to the gather view, and uses fp4_unswizzled_scale_infer_shape for input_scale, which matches the unswizzled layout the kernel consumes.
  • Kernel launch wiring (pointers, extra tile_idx_to_mn_limit and token_id_mapping, plus orig_m/m/n/k/l) is consistent with the other CuTe DSL kernels’ pattern.

Minor, non-blocking polish you can do if you care about lint/readability:

  • In get_valid_tactics, several unpacked inputs (a_sf, b_sf, alpha, tile_idx_to_group_idx, tile_idx_to_mn_limit) are unused; prefix them with _ to silence RUF059.
  • The register_fake implementation has a long parameter list where many arguments are unused; same trick (_weight_scale, _alpha, etc.) will quiet ARG00x warnings without changing behavior.
  • If you ever refactor, consider renaming the local l dimension variables to something less ambiguous (e.g., num_experts_local) to satisfy E741 and improve clarity.

These are purely cosmetic; the functional structure of the gather fusion runner and op looks correct.

Also applies to: 1781-1944, 1945-2017


2019-2044: FusedMoEInputsHelper is defined twice; consolidate to a single definition

There are now two identical FusedMoEInputsHelper class definitions in this module: one earlier in the file (around line 285) and this newly added one. Because the second definition overwrites the first when IS_CUTLASS_DSL_AVAILABLE is true, this duplication is harmless today but makes future maintenance fragile (changes might inadvertently be applied to only one copy).

Recommend keeping a single FusedMoEInputsHelper definition (imported where needed) and removing the duplicate to avoid confusion.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 68253d9 and a61ad60.

📒 Files selected for processing (5)
  • .pre-commit-config.yaml (1 hunks)
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (7 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1 hunks)
  • tensorrt_llm/_torch/utils.py (1 hunks)
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tensorrt_llm/_torch/utils.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
🧠 Learnings (10)
📓 Common learnings
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:0-0
Timestamp: 2025-08-08T05:10:38.906Z
Learning: The ScaledAccPerRowBiasPerColScaleScatter fusion in CUTLASS extensions (cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp) is specifically designed for per-column scaling factors only, so it uses a fixed Stride<_0,_1,int64_t> rather than conditional stride logic.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (1)
  • cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell (1949-1985)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (2)
tensorrt_llm/_torch/utils.py (3)
  • fp4_scale_infer_shape (286-291)
  • fp4_unswizzled_scale_infer_shape (294-300)
  • _ (227-233)
tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py (1)
  • can_implement (2525-2607)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/utils.py

297-297: Unpacked variable out_shape is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py

703-703: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py

160-160: Loop control variable i not used within loop body

Rename unused i to _i

(B007)


1666-1666: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


1667-1667: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


1682-1684: Avoid specifying long messages outside the exception class

(TRY003)


1689-1691: Avoid specifying long messages outside the exception class

(TRY003)


1706-1706: Unused method argument: profile

(ARG002)


1707-1707: Unused method argument: kwargs

(ARG002)


1709-1709: Unpacked variable a_sf is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1709-1709: Unpacked variable b_sf is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1709-1709: Unpacked variable alpha is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1709-1709: Unpacked variable tile_idx_to_group_idx is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1709-1709: Unpacked variable tile_idx_to_mn_limit is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


1713-1713: Ambiguous variable name: l

(E741)


1723-1723: Avoid specifying long messages outside the exception class

(TRY003)


1798-1798: Ambiguous variable name: l

(E741)


1993-1993: Unused function argument: weight_scale

(ARG001)


1994-1994: Unused function argument: alpha

(ARG001)


1995-1995: Unused function argument: tile_idx_to_group_idx

(ARG001)


1996-1996: Unused function argument: tile_idx_to_mn_limit

(ARG001)


1998-1998: Unused function argument: num_non_exiting_tiles

(ARG001)


1999-1999: Unused function argument: global_sf

(ARG001)


2000-2000: Unused function argument: num_experts

(ARG001)


2001-2001: Unused function argument: top_k

(ARG001)


2002-2002: Unused function argument: num_local_experts

(ARG001)


2003-2003: Unused function argument: local_expert_offset

(ARG001)


2004-2004: Unused function argument: tile_size

(ARG001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
.pre-commit-config.yaml (1)

1435-1442: Codespell ignore-list update is fine

Adding subtiles to the -L ignore list is a benign configuration tweak and won’t affect runtime behavior.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)

377-404: Gather-based fused path wiring looks consistent; please double-check scale/layout expectations

The new call to cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell is wired consistently with the custom op signature: you pass FP4-packed activations/weights, unswizzled input scales (x_sf), interleaved FC1 weights, tile_idx_to_group_idx, tile_idx_to_mn_limit, and permuted_idx_to_expanded_idx as token_id_mapping, plus num_non_exiting_tiles and fc2_input_scale as global_sf. The subsequent finalize path reuses the same tile_idx_to_mn_limit and permuted_idx_to_expanded_idx, preserving the original unpermute semantics.

Given the subtle layout contracts in the NVFP4 path, please verify at call sites that:

  • x_sf is always in the unswizzled layout expected by the gather kernel when viewed as torch.uint8, including after the optional DP allgather.
  • self.fc2_input_scale has the same shape (scalar or length-1 tensor) that the kernel and tests expect for global_sf.

If both hold, the integration should behave as intended.
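To make those two checks concrete, here is a hedged sanity-check sketch that could sit near the call site; the argument names mirror the description above, and this is illustrative rather than the module's actual code.

import torch

def check_gather_kernel_inputs(x_sf: torch.Tensor, fc2_input_scale: torch.Tensor) -> None:
    # The gather kernel consumes the input scales in the unswizzled layout, viewed as uint8.
    assert x_sf.dtype == torch.uint8, "x_sf must be viewed as torch.uint8 (unswizzled FP4 scales)"
    # global_sf is passed as fc2_input_scale; the kernel and tests expect a scalar or length-1 tensor.
    assert fc2_input_scale.numel() == 1, "fc2_input_scale should be a scalar or length-1 tensor"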

@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27122 [ run ] triggered by Bot. Commit: a61ad60

@tensorrt-cicd
Collaborator

PR_Github #27122 [ run ] completed with state SUCCESS. Commit: a61ad60
/LLM/main/L0_MergeRequest_PR pipeline #20692 completed with status: 'FAILURE'

@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27170 [ run ] triggered by Bot. Commit: a61ad60

@tensorrt-cicd
Collaborator

PR_Github #27170 [ run ] completed with state SUCCESS. Commit: a61ad60
/LLM/main/L0_MergeRequest_PR pipeline #20732 completed with status: 'FAILURE'

Signed-off-by: Zongfei Jing <[email protected]>
@zongfeijing requested a review from QiJune on December 8, 2025 at 10:24
Collaborator

@QiJune left a comment


LGTM

@tensorrt-cicd
Collaborator

PR_Github #27244 [ run ] completed with state SUCCESS. Commit: 78765c9
/LLM/main/L0_MergeRequest_PR pipeline #20802 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27304 [ run ] triggered by Bot. Commit: 4dee6bf

@tensorrt-cicd
Collaborator

PR_Github #27304 [ run ] completed with state SUCCESS. Commit: 4dee6bf
/LLM/main/L0_MergeRequest_PR pipeline #20854 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@zongfeijing
Collaborator Author

Holding this PR, waiting for perf analysis.

Signed-off-by: Zongfei Jing <[email protected]>
@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27524 [ run ] triggered by Bot. Commit: 2c72df0

@tensorrt-cicd
Collaborator

PR_Github #27524 [ run ] completed with state SUCCESS. Commit: 2c72df0
/LLM/main/L0_MergeRequest_PR pipeline #21001 completed with status: 'FAILURE'

@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27570 [ run ] triggered by Bot. Commit: 2c72df0

@zongfeijing enabled auto-merge (squash) on December 10, 2025 at 02:39
@zongfeijing disabled auto-merge on December 10, 2025 at 08:32
@zongfeijing
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27757 [ run ] triggered by Bot. Commit: 2c72df0

@tensorrt-cicd
Collaborator

PR_Github #27757 [ run ] completed with state SUCCESS. Commit: 2c72df0
/LLM/main/L0_MergeRequest_PR pipeline #21184 completed with status: 'SUCCESS'

@zongfeijing merged commit c76b428 into NVIDIA:main on Dec 11, 2025
5 checks passed
usberkeley pushed a commit to usberkeley/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 13, 2025
@zongfeijing deleted the user/zongfeij/gather_fc1 branch on December 15, 2025 at 02:32
sherry-1001 pushed a commit to sherry-1001/TensorRT-LLM that referenced this pull request Dec 16, 2025