[TRTLLM-9685] [feat] Add gather fc1 kernel by cuteDSL #9618
Conversation
Force-pushed from e96e2e1 to a61ad60
📝 Walkthrough
The changes introduce a gather-based fusion path for grouped GEMM operations in NVFP4, replacing the explicit two-step permutation flow with a single fused kernel. Updates include a new gather fusion runner class, token ID mapping generation, shape inference adjustments, and comprehensive testing for the new kernel pathway.
Sequence Diagram(s)
sequenceDiagram
participant Caller as MoE Forward Pass
participant Helper as GroupedGemmInputsHelper
participant Runner as Sm100GatherGroupedGemmRunner
participant KernelExec as NVFP4 Gather Kernel
Caller->>Helper: generate_token_id_mapping(num_tokens, expert_counts)
Helper->>Helper: Create token-to-expert mapping
Helper-->>Caller: token_id_mapping
Caller->>Runner: initialize with gather=True
Runner->>Runner: Validate tile_size & SM version
Caller->>Helper: inputs_pre_hook_gather_fusion(inputs)
Helper->>Helper: Prepare tile_idx_to_group_idx,<br/>tile_idx_to_mn_limit,<br/>token_id_mapping tensors
Helper-->>Caller: augmented_inputs
Caller->>Runner: get_valid_tactics(inputs, profile)
Runner->>Runner: Enumerate gather-specific tactics<br/>based on tile_size, top_k, etc.
Runner-->>Caller: tactic_list
Caller->>Runner: forward(inputs, tactic)
Runner->>Runner: Marshal kernel pointers
Runner->>KernelExec: Invoke with token_id_mapping & mappings
KernelExec->>KernelExec: Gather tokens per expert<br/>compute grouped GEMM<br/>apply SwiGLU
KernelExec-->>Runner: fused_output
Runner-->>Caller: output
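The token-to-expert mapping step in the diagram above can be sketched as follows. This is a hypothetical helper for illustration only (the real generate_token_id_mapping signature and layout may differ); padding slots are marked with -1, matching the sentinel the gather kernel expects:

```python
import torch


def generate_token_id_mapping(expert_counts: torch.Tensor,
                              tokens_per_expert_padded: int) -> torch.Tensor:
    """Hypothetical sketch: map each padded per-expert slot to an expanded
    token index, writing -1 into padding slots."""
    num_experts = expert_counts.numel()
    mapping = torch.full((num_experts * tokens_per_expert_padded,), -1,
                         dtype=torch.int32)
    next_token = 0
    for e in range(num_experts):
        count = int(expert_counts[e])
        start = e * tokens_per_expert_padded
        # Valid slots get consecutive expanded indices; the rest stay -1.
        mapping[start:start + count] = torch.arange(
            next_token, next_token + count, dtype=torch.int32)
        next_token += count
    return mapping
```

With expert_counts = [2, 1] and a padded group size of 2, the second expert's group has one padding slot, so the mapping ends in -1.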
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (5)
tensorrt_llm/_torch/utils.py (1)
294-300: Unswizzled FP4 scale shape helper looks correct; minor lint nit
fp4_unswizzled_scale_infer_shape correctly mirrors fp4_scale_infer_shape with is_swizzled_layout=False, which matches its intended use in the gather path. The local out_shape binding is unused (same as in fp4_scale_infer_shape), which triggers RUF059; you can either drop it or rename it to _out_shape in both helpers to quiet lint.

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1)
646-858: Gather-based NVFP4 SwiGLU test is solid; only micro cleanups are optional
The test exercises the full gather path (routing, tile metadata, token_id_mapping, GEMM+SwiGLU, FP4 quant, and per-token scales) and correctly masks out padding positions when comparing outputs and scale factors, which is what matters for functional validation.
If you want to simplify a bit (optional):
- valid_token_mask and the scale selection loops can be vectorized using mask = token_id_mapping[:num_valid_permuted_tokens] >= 0 and boolean indexing instead of Python for loops.
- The __main__ block will bypass the @pytest.mark.skipif SM check if someone runs the file directly on a non-SM100 GPU; consider guarding that path or relying on pytest only.

These are style/ergonomics only; the current logic is correct.
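The suggested vectorization could look like this minimal sketch; the tensor contents here are made up for illustration, and the names only loosely follow the test:

```python
import torch

# Illustrative data: expanded index per permuted slot, -1 marks padding.
token_id_mapping = torch.tensor([0, 1, -1, 2, -1], dtype=torch.int32)
per_token_scales = torch.tensor([0.5, 0.25, 0.0, 0.125, 0.0])

# One boolean mask replaces the per-position Python loop.
valid_token_mask = token_id_mapping >= 0
valid_scales = per_token_scales[valid_token_mask]
```

Boolean indexing keeps the whole selection on-device and avoids a Python-level loop over token positions.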
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (3)
25-41: Gather-specific helper extensions are coherent; unify padding sentinels if possible
The GroupedGemmInputsHelper extensions for gather mode (fuse_gather, shape_infer_tensor_idx, generate_token_id_mapping, and inputs_pre_hook_gather_fusion) are consistent with the existing tile/permute helpers and with how the gather kernel expects token_id_mapping (expanded indices with -1 for padding). Using shape_infer_tensor_idx to switch inference between a and token_id_mapping keeps the same helper usable for both fused and gather paths.

One small consistency nit: for tile-related metadata you use self.pad_val = int(2e9) as the padding sentinel, while token_id_mapping uses -1. If the kernels rely on these sentinels in more than one place, it may be safer to centralize them as named constants (or at least document the two different "invalid" conventions in the class docstring) to avoid divergence later. Functionally this is fine as-is.

Also applies to: 74-85, 143-176, 238-283
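Centralizing the two sentinels could be as lightweight as module-level constants; the names below are illustrative, not from the PR:

```python
# Illustrative constant names; the module currently uses these literals inline.
TILE_PAD_SENTINEL = int(2e9)  # padding value in tile metadata tensors
TOKEN_PAD_SENTINEL = -1       # padding value in token_id_mapping


def is_padding_token(expanded_idx: int) -> bool:
    """True if the expanded index marks a padding slot in token_id_mapping."""
    return expanded_idx == TOKEN_PAD_SENTINEL
```

With named constants, the kernels and helpers reference one definition per convention, so the two "invalid" values cannot silently diverge.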
1663-1780: Gather SwiGLU fusion runner and custom op look aligned with existing runners; only lint/clarity nits
The new Sm100BlockScaledContiguousGatherGroupedGemmSwigluFusionRunner and cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell custom op mirror the existing contiguous SwiGLU runner:

- Shape logic uses m = token_id_mapping.size(0) (post-gather, permuted M) and orig_m = a.size(0) (original tokens) and enforces the same NVFP4 alignment constraints on K/N and scales.
- Tuning config correctly switches dynamic tensor specs and constraints to the gather view, and uses fp4_unswizzled_scale_infer_shape for input_scale, which matches the unswizzled layout the kernel consumes.
- Kernel launch wiring (pointers, extra tile_idx_to_mn_limit and token_id_mapping, plus orig_m/m/n/k/l) is consistent with the other CuTe DSL kernels' pattern.

Minor, non-blocking polish you can do if you care about lint/readability:

- In get_valid_tactics, several unpacked inputs (a_sf, b_sf, alpha, tile_idx_to_group_idx, tile_idx_to_mn_limit) are unused; prefix them with _ to silence RUF059.
- The register_fake implementation has a long parameter list where many arguments are unused; the same trick (_weight_scale, _alpha, etc.) will quiet ARG00x warnings without changing behavior.
- If you ever refactor, consider renaming the local l dimension variables to something less ambiguous (e.g., num_experts_local) to satisfy E741 and improve clarity.

These are purely cosmetic; the functional structure of the gather fusion runner and op looks correct.
Also applies to: 1781-1944, 1945-2017
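The underscore-prefix fix from the lint notes looks like this; it is a generic sketch, not the actual unpacking from the runner:

```python
def pick_used_inputs(inputs):
    """Sketch: unused unpacked values get a leading underscore (RUF059)."""
    a, b, _a_sf, _b_sf, _alpha = inputs  # only a and b are consumed below
    return a, b
```

The underscore prefix documents intent (the value is deliberately ignored) and silences the linter without changing behavior.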
2019-2044: FusedMoEInputsHelper is defined twice; consolidate to a single definition
There are now two identical FusedMoEInputsHelper class definitions in this module: one earlier in the file (around line 285) and this newly added one. Because the second definition overwrites the first when IS_CUTLASS_DSL_AVAILABLE is true, this duplication is harmless today but makes future maintenance fragile (changes might inadvertently be applied to only one copy).

Recommend keeping a single FusedMoEInputsHelper definition (imported where needed) and removing the duplicate to avoid confusion.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .pre-commit-config.yaml (1 hunks)
- tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (7 hunks)
- tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1 hunks)
- tensorrt_llm/_torch/utils.py (1 hunks)
- tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""")
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
- tensorrt_llm/_torch/utils.py
- tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
- tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
- tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
- tensorrt_llm/_torch/utils.py
- tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
- tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
- tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
🧠 Learnings (10)
📓 Common learnings
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:0-0
Timestamp: 2025-08-08T05:10:38.906Z
Learning: The ScaledAccPerRowBiasPerColScaleScatter fusion in CUTLASS extensions (cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp) is specifically designed for per-column scaling factors only, so it uses a fixed Stride<_0,_1,int64_t> rather than conditional stride logic.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.pytensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.pytensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Applied to files:
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (1)
cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell (1949-1985)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (2)
tensorrt_llm/_torch/utils.py (3)
fp4_scale_infer_shape (286-291), fp4_unswizzled_scale_infer_shape (294-300), _ (227-233)
tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py (1)
can_implement (2525-2607)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/utils.py
297-297: Unpacked variable out_shape is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
703-703: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
160-160: Loop control variable i not used within loop body
Rename unused i to _i
(B007)
1666-1666: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
1667-1667: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
1682-1684: Avoid specifying long messages outside the exception class
(TRY003)
1689-1691: Avoid specifying long messages outside the exception class
(TRY003)
1706-1706: Unused method argument: profile
(ARG002)
1707-1707: Unused method argument: kwargs
(ARG002)
1709-1709: Unpacked variable a_sf is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1709-1709: Unpacked variable b_sf is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1709-1709: Unpacked variable alpha is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1709-1709: Unpacked variable tile_idx_to_group_idx is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1709-1709: Unpacked variable tile_idx_to_mn_limit is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
1713-1713: Ambiguous variable name: l
(E741)
1723-1723: Avoid specifying long messages outside the exception class
(TRY003)
1798-1798: Ambiguous variable name: l
(E741)
1993-1993: Unused function argument: weight_scale
(ARG001)
1994-1994: Unused function argument: alpha
(ARG001)
1995-1995: Unused function argument: tile_idx_to_group_idx
(ARG001)
1996-1996: Unused function argument: tile_idx_to_mn_limit
(ARG001)
1998-1998: Unused function argument: num_non_exiting_tiles
(ARG001)
1999-1999: Unused function argument: global_sf
(ARG001)
2000-2000: Unused function argument: num_experts
(ARG001)
2001-2001: Unused function argument: top_k
(ARG001)
2002-2002: Unused function argument: num_local_experts
(ARG001)
2003-2003: Unused function argument: local_expert_offset
(ARG001)
2004-2004: Unused function argument: tile_size
(ARG001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (2)
.pre-commit-config.yaml (1)
1435-1442: Codespell ignore-list update is fine
Adding subtiles to the -L ignore list is a benign configuration tweak and won't affect runtime behavior.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)
377-404: Gather-based fused path wiring looks consistent; please double-check scale/layout expectations
The new call to cute_dsl_nvfp4_gather_grouped_gemm_swiglu_blackwell is wired consistently with the custom op signature: you pass FP4-packed activations/weights, unswizzled input scales (x_sf), interleaved FC1 weights, tile_idx_to_group_idx, tile_idx_to_mn_limit, and permuted_idx_to_expanded_idx as token_id_mapping, plus num_non_exiting_tiles and fc2_input_scale as global_sf. The subsequent finalize path reuses the same tile_idx_to_mn_limit and permuted_idx_to_expanded_idx, preserving the original unpermute semantics.

Given the subtle layout contracts in the NVFP4 path, please verify at call sites that:

- x_sf is always in the unswizzled layout expected by the gather kernel when viewed as torch.uint8, including after the optional DP allgather.
- self.fc2_input_scale has the same shape (scalar or length-1 tensor) that the kernel and tests expect for global_sf.

If both hold, the integration should behave as intended.
/bot run --disable-fail-fast
PR_Github #27122 [ run ] triggered by Bot. Commit:
PR_Github #27122 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #27170 [ run ] triggered by Bot. Commit:
PR_Github #27170 [ run ] completed with state
Signed-off-by: Zongfei Jing <[email protected]>
QiJune left a comment
LGTM
PR_Github #27244 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #27304 [ run ] triggered by Bot. Commit:
PR_Github #27304 [ run ] completed with state
Hold on this PR, waiting for perf analysis.
Signed-off-by: Zongfei Jing <[email protected]>
/bot run --disable-fail-fast
PR_Github #27524 [ run ] triggered by Bot. Commit:
PR_Github #27524 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #27570 [ run ] triggered by Bot. Commit:
/bot run --disable-fail-fast
PR_Github #27757 [ run ] triggered by Bot. Commit:
PR_Github #27757 [ run ] completed with state
Signed-off-by: Zongfei Jing <[email protected]>
Summary by CodeRabbit
New Features
Tests
Chores
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill
kill: Kill all running builds associated with the pull request.

skip
skip --comment COMMENT: Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
reuse-pipeline: Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.