Conversation

@bobboli (Collaborator) commented Nov 24, 2025

Summary by CodeRabbit

  • Refactor

    • Refactored Mixture of Experts all-to-all communication method selection from string-based backend flags to dedicated method type enumerations, improving code clarity and consistency.
    • Removed public moe_alltoall_backend property; method selection now uses dedicated enum types.
    • Updated default behavior for all-to-all method selection.
  • Tests

    • Updated tests to cover new all-to-all method type variants.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
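
For example, a couple of invocations composed purely from the flags documented above (the stage name is illustrative):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
/bot run --reuse-test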

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.
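
For example (the comment text is illustrative):

/bot skip --comment "Docs-only change; CI not required"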

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.

@bobboli requested a review from a team as a code owner on November 24, 2025 04:00
@bobboli requested a review from HuiGao-NV on November 24, 2025 04:00
@bobboli (Collaborator Author) commented Nov 24, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25488 [ run ] triggered by Bot. Commit: eee5304

@coderabbitai bot (Contributor) commented Nov 24, 2025

📝 Walkthrough

This change refactors the MOE all-to-all backend selection from string-based checks to an enumerated type system. The MNNVL enum value is replaced with NVLinkOneSided and NVLinkTwoSided, the moe_alltoall_backend cached property is removed, and all conditional logic across multiple implementations is updated to use AlltoallMethodType enum comparisons instead of backend string checks.

Changes

  • Enum definition (tensorrt_llm/_torch/modules/fused_moe/interface.py): Refactored the AlltoallMethodType enum: removed MNNVL = 1, added NVLinkOneSided = 1 and NVLinkTwoSided = 2, renumbered DeepEP to 3 and DeepEPLowLatency to 4.
  • Cutlass implementation (tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py): Replaced moe_alltoall_backend string checks with AlltoallMethodType enum comparisons; removed the cached_property; updated select_alltoall_method_type to default to NVLinkOneSided; refactored control flow for alltoall enablement, workspace selection, and payload handling to use enum-based routing.
  • TRTLLMGen implementation (tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py): Replaced moe_alltoall_backend string checks with AlltoallMethodType enum comparisons; removed the cached_property; updated the default selection logic from MNNVL to NVLinkOneSided; refactored initialization, quantization, and forward logic to consistently route on alltoall_method_type.
  • WideEP implementation (tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py): Replaced MNNVL references with NVLinkTwoSided across feasibility checks, dispatch paths, and post-quantization handling; removed the moe_alltoall_backend cached_property; removed the TRTLLM_MOE_DISABLE_ALLTOALLV environment variable gate.
  • Test updates (tests/unittest/_torch/modules/test_fused_moe.py): Updated parametrization to use NVLinkOneSided and NVLinkTwoSided instead of MNNVL; relaxed output-shape conditions to apply sum(dim=1) for all method types when output.ndim == 3.
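
For reference, a minimal sketch of the new enum as described in this walkthrough (reconstructed from the summary above, not copied from interface.py; the IntEnum base and NotEnabled = 0 are assumptions):

from enum import IntEnum

class AlltoallMethodType(IntEnum):
    NotEnabled = 0        # assumed unchanged: fall back to allgather/reduce-scatter
    NVLinkOneSided = 1    # replaces MNNVL = 1
    NVLinkTwoSided = 2    # new variant
    DeepEP = 3            # renumbered from 2
    DeepEPLowLatency = 4  # renumbered from 3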

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Enum value reordering: Verify enum value changes (MNNVL = 1 → NVLinkOneSided = 1; DeepEP = 2 → 3; DeepEPLowLatency = 3 → 4) do not break serialization, deserialization, or stored configurations
  • Default behavior change: Confirm NVLinkOneSided default (replacing MNNVL) maintains expected behavior and does not introduce performance regressions
  • Removed moe_alltoall_backend property: Ensure no external code or tests depend on this public cached_property
  • TRTLLM_MOE_DISABLE_ALLTOALLV removal: Verify removal of this environment variable gate is intentional and does not break existing deployments that may rely on it
  • Enum conditional logic: Check all AlltoallMethodType comparisons in each implementation (Cutlass, TRTLLMGen, WideEP) correctly handle the new enum variants and default cases

Pre-merge checks

❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The PR description is essentially empty; it only contains the repository template and CodeRabbit prompts, with no actual implementation details, rationale, or test-coverage information. Resolution: add a concrete description explaining the refactoring objectives (replacing MNNVL with NVLinkOneSided/NVLinkTwoSided), the affected modules, and the relevant test coverage.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 30.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly identifies the main change (refactoring the AlltoallMethodType enum), which aligns with the primary modifications across multiple files.


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)

163-172: Minor: error message references the wrong class name

The NotImplementedError for DeepEP/DeepEPLowLatency currently says "not supported for CutlassFusedMoE yet" even though this is inside TRTLLMGenFusedMoE.select_alltoall_method_type.

A tiny wording fix will avoid confusion:

-                raise NotImplementedError(
-                    "DeepEP and DeepEPLowLatency are not supported for CutlassFusedMoE yet"
-                )
+                raise NotImplementedError(
+                    "DeepEP and DeepEPLowLatency are not supported for TRTLLMGenFusedMoE yet"
+                )
🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

696-700: Chunked DP overlap is disabled for NVLink alltoall (by design?)

For the multi‑chunk path, the overlap between compute and reducescatter_or_allreduce is now skipped when alltoall_method_type is NVLinkOneSided or NVLinkTwoSided, while it remains enabled for the non‑alltoall case.

That’s reasonable given that alltoall already dominates communication cost, but if you later add more alltoall types it may be cleaner to express this as a dedicated flag (e.g., supports_chunk_overlap) rather than hard‑coding specific enum values.

Also applies to: 717-721

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c045e35 and eee5304.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (12 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (10 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (9 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (1 hunks)
  • tests/unittest/_torch/modules/test_fused_moe.py (4 hunks)
🧰 Additional context used
🧠 Learnings (13)
📓 Common learnings
Learnt from: nzmora-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 9163
File: tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py:107-113
Timestamp: 2025-11-14T11:22:03.729Z
Learning: In TensorRT-LLM AutoDeploy custom ops, when adding hardware capability checks to select between kernel implementations (e.g., cuBLAS vs. CUDA kernel), use descriptive variable names that identify the specific GPU architectures or families being targeted (e.g., `is_blackwell_geforce_or_ada`) rather than generic names like `enable_cuda_core`. This makes it clear that the code is selecting an implementation path based on hardware capabilities, not enabling/disabling hardware features.
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tests/unittest/_torch/modules/test_fused_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tests/unittest/_torch/modules/test_fused_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
📚 Learning: 2025-11-14T11:22:03.729Z
Learnt from: nzmora-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 9163
File: tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py:107-113
Timestamp: 2025-11-14T11:22:03.729Z
Learning: In TensorRT-LLM AutoDeploy custom ops, when adding hardware capability checks to select between kernel implementations (e.g., cuBLAS vs. CUDA kernel), use descriptive variable names that identify the specific GPU architectures or families being targeted (e.g., `is_blackwell_geforce_or_ada`) rather than generic names like `enable_cuda_core`. This makes it clear that the code is selecting an implementation path based on hardware capabilities, not enabling/disabling hardware features.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
📚 Learning: 2025-10-13T19:45:03.518Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: tests/unittest/_torch/multi_gpu/test_nccl_device.py:138-149
Timestamp: 2025-10-13T19:45:03.518Z
Learning: In test_nccl_device.py, the NCCL device AllReduce implementation compares the entire residual tensor on each rank, unlike the UB implementation which compares per-rank chunks. The residual chunking calculations in the test are intentionally overridden to reflect this design difference.

Applied to files:

  • tests/unittest/_torch/modules/test_fused_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
🧬 Code graph analysis (4)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • AlltoallMethodType (26-36)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • AlltoallMethodType (26-36)
tests/unittest/_torch/modules/test_fused_moe.py (1)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • AlltoallMethodType (26-36)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (2)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • AlltoallMethodType (26-36)
tensorrt_llm/_torch/distributed/moe_alltoall.py (1)
  • MoeAlltoAll (26-235)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (11)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)

25-36: Enum value changes – confirm there are no external numeric dependencies

AlltoallMethodType now renumbers members (NVLinkOneSided=1, NVLinkTwoSided=2, DeepEP=3, DeepEPLowLatency=4). This is fine as long as all call sites use the symbolic names or env strings, but it would break any persisted/int-based contracts (e.g. configs, logs parsed as raw ints, cross-language bindings). Please double‑check that nothing outside this repo relies on the previous numeric values.
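
A concrete illustration of the hazard, continuing the enum sketch from the walkthrough above (the stored value is hypothetical):

# An int persisted under the old numbering no longer decodes to the same member:
stored = 2                        # was DeepEP before this PR
AlltoallMethodType(stored)        # now AlltoallMethodType.NVLinkTwoSided
AlltoallMethodType["DeepEP"]      # name-based lookups remain stable across renumbering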

tests/unittest/_torch/modules/test_fused_moe.py (2)

210-215: Alltoall test parametrization correctly tracks new enum values

The parametrized alltoall_method_type sets now cover NVLinkOneSided, NVLinkTwoSided, DeepEP, DeepEPLowLatency, and NotEnabled where appropriate, matching the new AlltoallMethodType contract used in the implementations. This keeps coverage in sync with the refactor.

Also applies to: 323-329, 685-689
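
Schematically, the updated parametrization reads like this (a sketch, not the literal test code; the test name is illustrative):

import pytest
from tensorrt_llm._torch.modules.fused_moe.interface import AlltoallMethodType

@pytest.mark.parametrize("alltoall_method_type", [
    AlltoallMethodType.NotEnabled,
    AlltoallMethodType.NVLinkOneSided,
    AlltoallMethodType.NVLinkTwoSided,
    AlltoallMethodType.DeepEP,
    AlltoallMethodType.DeepEPLowLatency,
])
def test_fused_moe_alltoall(alltoall_method_type):
    ...  # exercise the dispatch/combine path for the selected method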


305-307: Shape‑based reduction for 3D outputs looks appropriate

Switching to if output.ndim == 3: output = output.sum(dim=1) decouples the test from specific alltoall methods and relies only on the presence of an extra top_k dimension. That matches how the WideEP/Cutlass paths structure their outputs today and keeps the test robust as new alltoall methods are added.

If any backend ever returns 3D output for reasons other than a top_k dimension, you may want a more explicit condition here.
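
For example, under the assumption that the extra dimension is top_k:

import torch

num_tokens, top_k, hidden = 4, 2, 8
output = torch.randn(num_tokens, top_k, hidden)  # one partial result per selected expert
if output.ndim == 3:
    output = output.sum(dim=1)                   # reduce over top_k
assert output.shape == (num_tokens, hidden)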

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (3)

145-168: NVLinkTwoSided vs NVLinkOneSided initialization looks consistent

The split between NVLinkTwoSided (Mnnvl workspaces) and NVLinkOneSided (Python MoeAlltoAll with num_experts=self.num_slots and invalid‑expert id self.num_slots) matches the WideEPMoE usage pattern and the MoeAlltoAll contract. Workspace sizing via TRTLLM_MOE_A2A_WORKSPACE_MB is clear and bounded per rank.


233-237: Stronger default to NVLinkOneSided – consider external expectations

select_alltoall_method_type now always returns NVLinkOneSided when DP>1, moe_tp_size==1, and MnnvlMemory.supports_mnnvl(), instead of honoring previous heuristics like ep_size > top_k or an explicit disable flag. This is likely desired for performance, but it does change behavior for existing deployments that might have relied on the old heuristic or TRTLLM_MOE_DISABLE_ALLTOALLV.

If you have workloads that were intentionally running without alltoall despite MNNVL being available, you may want to:

  • Document the new default, and/or
  • Recommend TRTLLM_FORCE_ALLTOALL_METHOD=NotEnabled where disabling is still needed.

Also applies to: 239-243
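
For illustration, a minimal sketch of how such an override could take precedence over the default (the parsing shown here is an assumption; the real logic lives in select_alltoall_method_type):

import os
from tensorrt_llm._torch.modules.fused_moe.interface import AlltoallMethodType

forced = os.environ.get("TRTLLM_FORCE_ALLTOALL_METHOD")
if forced is not None:
    # e.g. TRTLLM_FORCE_ALLTOALL_METHOD=NotEnabled to keep alltoall disabled
    method = AlltoallMethodType[forced]
else:
    method = AlltoallMethodType.NVLinkOneSided  # new default when MNNVL is supported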


314-317: Load‑balancer stats and NVLink alltoall flows are wired correctly

  • Using ignore_allreduce = (alltoall_method_type == NVLinkTwoSided) delegates global statistics aggregation to the NVLink prepare/combine path, avoiding double allreduce.
  • The NVLinkTwoSided path prepares with mnnvl_moe_alltoallv_prepare_without_allgather, updates global stats via update_statistic_with_gathered_statistic, and uses memset_expert_ids(..., invalid_id=self.num_slots) for padding, which matches the existing MNNVL semantics.
  • The NVLinkOneSided path uses MoeAlltoAll.dispatch/combine with consistent payload ordering and view/reshape logic, and correctly reuses a workspace‑backed output when available.

Overall, the control flow and tensor reshaping look sound.

Also applies to: 427-460, 571-597

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (3)

113-153: WideEPMoE alltoall initialization for NVLink/DeepEP variants looks correct

The constructor cleanly maps:

  • NVLinkTwoSided → MnnvlMemory.initialize() plus MnnvlMoe.get_moe_workspaces/get_moe_prepare_workspace.
  • DeepEP → shared buffer_pool with reserve(hidden_size, dtype).
  • DeepEPLowLatency → low‑latency buffer with reserve(self.deep_ep_max_num_tokens, hidden_size, self.num_slots) and NVSHMEM_QP_DEPTH tuning.

Unknown/NotEnabled method types are rejected early. This is consistent with the intended separation between NVLink and DeepEP backends.


185-223: Alltoall selection and gating – behavior changes look intentional, please confirm

  • select_alltoall_method_type now:

    • Requires enable_attention_dp and mapping.tp_size > 1.
    • Disables alltoall when moe_ep_size <= top_k.
    • Prefers NVLinkTwoSided if MnnvlMemory.supports_mnnvl().
    • Otherwise, optionally selects DeepEP/DeepEPLowLatency based on TRTLLM_CAN_USE_DEEP_EP and BF16.
  • can_use_alltoall always returns True for NVLinkTwoSided, even when multiple MoE chunks are required, while still disabling alltoall under chunking for DeepEP/LowLatency.

  • is_post_quant_all2all_supported now treats NVLinkTwoSided as always supporting post‑quant alltoall, and narrows DeepEP/LowLatency support to specific quant algorithms.

This aligns with the new NVLink‑centric design, but it does slightly broaden the scenarios where NVLinkTwoSided alltoall is used (e.g., with chunking). Please verify this matches the expected capability/perf envelope of the NVLink path.

Also applies to: 245-263, 367-377


400-401: NVLinkTwoSided load‑balancer integration and combine flow look consistent

  • Using ignore_allreduce = (alltoall_method_type == NVLinkTwoSided) delegates global load‑balancer stats to the NVLink prepare path via alltoall_prepare and _load_balancer_update_statistic_with_gathered_statistic, avoiding redundant allreduces.
  • alltoall_prepare / alltoall_dispatch / alltoall_combine wrap the Mnnvl NVLink calls cleanly and centrally, with expert‑id padding handled via memset_expert_ids.
  • The combine branch for NVLinkTwoSided reuses these helpers and respects alltoall_result_do_sum, while DeepEP/LowLatency paths continue to use their respective buffer‑based combine routines.

Overall, the NVLinkTwoSided flow through prepare → dispatch → MoEOp → combine is structured and consistent with the other backends.

Also applies to: 427-433, 463-476, 554-559, 651-687

tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (2)

111-119: TRTLLMGenFusedMoE alltoall selection and init mirror Cutlass logic

  • The class now chooses an alltoall_method_type and logs it once, then:

    • For NVLinkTwoSided, initializes Mnnvl workspaces via MnnvlMemory/MnnvlMoe.
    • For NVLinkOneSided, sets up a Python MoeAlltoAll instance with num_experts=self.num_slots and user‑configurable workspace size.
    • Rejects DeepEP/LowLatency as not yet supported.
  • select_alltoall_method_type follows the same pattern as Cutlass: require DP>1, moe_tp_size == 1, MNNVL support, optional TRTLLM_FORCE_ALLTOALL_METHOD override, otherwise default to NVLinkOneSided.

  • enable_alltoall is derived purely from the enum (non‑NotEnabled), which keeps downstream logic simple.

This brings TRTLLMGenFusedMoE in line with the other MoE backends.

Also applies to: 122-138, 151-179, 184-189


339-369: NVLink alltoall + MoeAlltoAll integration in forward path looks sound

  • For post‑quant communication, load‑balancer stats are updated with ignore_allreduce=True when using NVLinkTwoSided, relying on the subsequent Mnnvl gather to provide global statistics.

  • The NVLinkTwoSided branch:

    • Prepares alltoall via mnnvl_moe_alltoallv_prepare_without_allgather, optionally passing LB local stats and feeding back gathered stats.
    • Does a multi‑payload alltoall of [x, x_sf, token_selected_experts, token_final_scales].
    • Uses memset_expert_ids(..., invalid_id=-1) consistent with TRTLLM‑Gen semantics.
  • The NVLinkOneSided branch:

    • Uses MoeAlltoAll.dispatch/combine with payloads [x, (optional x_sf), token_selected_experts, token_final_scales] and invalid‑id -1, and correctly reshapes the 3D recv tensors back to 2D.
    • Optionally obtains a workspace‑backed output tensor via get_combine_payload_tensor_in_workspace when using W4A8_MXFP4_MXFP8, avoiding an extra copy on combine.
  • The final combine block dispatches to either Mnnvl combine or MoeAlltoAll.combine based on the enum, with clear error handling for unsupported types.

Overall, the control flow and tensor reshaping/dtypes for both NVLink variants match the intended semantics.

Also applies to: 350-355, 370-475, 495-502, 761-799

reasonsolo pushed a commit to reasonsolo/TensorRT-LLM that referenced this pull request Nov 24, 2025
@tensorrt-cicd (Collaborator)

PR_Github #25488 [ run ] completed with state SUCCESS. Commit: eee5304
/LLM/main/L0_MergeRequest_PR pipeline #19299 completed with status: 'FAILURE'

@bobboli (Collaborator Author) commented Nov 24, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25495 [ run ] triggered by Bot. Commit: eee5304

@bobboli (Collaborator Author) commented Nov 24, 2025

/bot kill

@bobboli (Collaborator Author) commented Nov 24, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25507 [ run ] triggered by Bot. Commit: beea101

@tensorrt-cicd (Collaborator)

PR_Github #25495 [ run ] completed with state ABORTED. Commit: eee5304
LLM/main/L0_MergeRequest_PR #19306 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #25508 [ kill ] triggered by Bot. Commit: beea101

@tensorrt-cicd (Collaborator)

PR_Github #25507 [ run ] completed with state ABORTED. Commit: beea101

@tensorrt-cicd (Collaborator)

PR_Github #25508 [ kill ] completed with state SUCCESS. Commit: beea101
Successfully killed previous jobs for commit beea101

@bobboli (Collaborator Author) commented Nov 24, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25524 [ run ] triggered by Bot. Commit: beea101

@tensorrt-cicd (Collaborator)

PR_Github #25524 [ run ] completed with state SUCCESS. Commit: beea101
/LLM/main/L0_MergeRequest_PR pipeline #19330 completed with status: 'FAILURE'

@bobboli (Collaborator Author) commented Nov 24, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25549 [ run ] triggered by Bot. Commit: beea101

@bobboli force-pushed the alltoall_method_refactor branch from 02cb388 to 018a80a on November 25, 2025 06:21
@bobboli (Collaborator Author) commented Nov 25, 2025

/bot run

@bobboli enabled auto-merge (squash) on November 25, 2025 06:21
@tensorrt-cicd (Collaborator)

PR_Github #25665 [ run ] triggered by Bot. Commit: 018a80a

@tensorrt-cicd (Collaborator)

PR_Github #25665 [ run ] completed with state SUCCESS. Commit: 018a80a
/LLM/main/L0_MergeRequest_PR pipeline #19451 completed with status: 'FAILURE'

@bobboli (Collaborator Author) commented Nov 25, 2025

/bot run --reuse-test

@tensorrt-cicd (Collaborator)

PR_Github #25725 [ run ] triggered by Bot. Commit: 018a80a

@bobboli force-pushed the alltoall_method_refactor branch from 018a80a to f85b11f on November 25, 2025 10:59
@bobboli (Collaborator Author) commented Nov 25, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25728 [ run ] triggered by Bot. Commit: f85b11f

@tensorrt-cicd (Collaborator)

PR_Github #25725 [ run ] completed with state ABORTED. Commit: 018a80a
LLM/main/L0_MergeRequest_PR #19507 (Blue Ocean) completed with status: ABORTED

@bobboli (Collaborator Author) commented Nov 25, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25739 [ run ] triggered by Bot. Commit: f85b11f

@tensorrt-cicd (Collaborator)

PR_Github #25728 [ run ] completed with state ABORTED. Commit: f85b11f
LLM/main/L0_MergeRequest_PR #19510 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #25739 [ run ] completed with state SUCCESS. Commit: f85b11f
/LLM/main/L0_MergeRequest_PR pipeline #19518 completed with status: 'FAILURE'

@bobboli (Collaborator Author) commented Nov 25, 2025

/bot run --reuse-test

@tensorrt-cicd (Collaborator)

PR_Github #25769 [ run ] triggered by Bot. Commit: f85b11f

@tensorrt-cicd (Collaborator)

PR_Github #25769 [ run ] completed with state FAILURE. Commit: f85b11f
/LLM/main/L0_MergeRequest_PR pipeline #19543 completed with status: 'FAILURE'

@bobboli (Collaborator Author) commented Nov 26, 2025

/bot run --reuse-test

2 similar comments
@bobboli (Collaborator Author) commented Nov 26, 2025

/bot run --reuse-test

@bobboli (Collaborator Author) commented Nov 27, 2025

/bot run --reuse-test

@tensorrt-cicd (Collaborator)

PR_Github #25946 [ run ] triggered by Bot. Commit: f85b11f

@tensorrt-cicd (Collaborator)

PR_Github #25946 [ run ] completed with state SUCCESS. Commit: f85b11f
/LLM/main/L0_MergeRequest_PR pipeline #19677 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@bobboli merged commit 62b7718 into NVIDIA:main Nov 27, 2025
4 of 5 checks passed
MinaHuai pushed a commit to davidmlw/TensorRT-LLM that referenced this pull request Dec 10, 2025
…VIDIA#8779)

The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning.

Signed-off-by: Yukun He <[email protected]>

[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429)

Signed-off-by: qqiao <[email protected]>

[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409)

Signed-off-by: Eran Geva <[email protected]>

[None][ci] Move more test stages to use OCI machines (NVIDIA#9395)

Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Matt Lefebvre <[email protected]>

[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377)

Signed-off-by: Anthony Chang <[email protected]>

[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093)

Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash.

This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens.

Signed-off-by: eopXD <[email protected]>

[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426)

Signed-off-by: mni <[email protected]>

[None][fix] Mitigate test timeout issues (NVIDIA#9445)

Signed-off-by: Shixiaowei02 <[email protected]>

[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427)

Signed-off-by: Fanrong Li <[email protected]>

[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440)

Signed-off-by: Tailing Yuan <[email protected]>

[None][feat] Support custom chat template for tool calling (NVIDIA#9297)

Signed-off-by: Pengyun Lin <[email protected]>

[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586)

Signed-off-by: Yue Weng <[email protected]>

[None][ci] waive a test (NVIDIA#9458)

Signed-off-by: Yan Chunwei <[email protected]>

[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457)

Signed-off-by: ixlmar <[email protected]>

[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308)

Signed-off-by: Robin Kobus <[email protected]>

[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430)

Signed-off-by: Suyog Gupta <[email protected]>

[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438)

Signed-off-by: Chuang Zhu <[email protected]>

[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455)

Signed-off-by: Yiqing Yan <[email protected]>

[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467)

Signed-off-by: junq <[email protected]>

[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224)

Signed-off-by: shuyix <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444)

Signed-off-by: Enwei Zhu <[email protected]>

[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461)

Signed-off-by: Chenghao Zhang <[email protected]>
Co-authored-by: Suyog Gupta <[email protected]>

[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502)

Signed-off-by: Zheyu Fu <[email protected]>

[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498)

Signed-off-by: Aurelien Chartier <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481)

Signed-off-by: junq <[email protected]>

[None][chore] update comments in llm_args.py (NVIDIA#9472)

Signed-off-by: junq <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473)

Signed-off-by: Hui Gao <[email protected]>

[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294)

Signed-off-by: Jiagan Cheng <[email protected]>

[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505)

Signed-off-by: ziyixiong-nv <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447)

Signed-off-by: Lizhi Zhou <[email protected]>
Co-authored-by: Shi Xiaowei <[email protected]>

[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346)

Signed-off-by: xxi <[email protected]>

[None][infra] Fix Slurm job script (NVIDIA#9508)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330)

Signed-off-by: jiant <[email protected]>

[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518)

Signed-off-by: eopXD <[email protected]>

[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999)

Signed-off-by: Yiqing Yan <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449)

Signed-off-by: Enwei Zhu <[email protected]>

[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504)

Signed-off-by: Lucas Liebenwein <[email protected]>

[None] [chore] Update to cutlass 4.3 (NVIDIA#8637)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386)

Signed-off-by: Bo Li <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Co-authored-by: Enwei Zhu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>

[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211)

Signed-off-by: Yukun He <[email protected]>

[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441)

Signed-off-by: yufeiwu <[email protected]>

[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493)

Signed-off-by: Kaiyu Xie <[email protected]>

[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539)

Signed-off-by: qqiao <[email protected]>

[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521)

Signed-off-by: Robin Kobus <[email protected]>

[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477)

Signed-off-by: ZhanruiSunCh <[email protected]>
Signed-off-by: Zhanrui Sun <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537)

Signed-off-by: Wangshanshan <[email protected]>

[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494)

Signed-off-by: Matthias Jouanneaux <[email protected]>
Signed-off-by: Zheyu Fu <[email protected]>
Co-authored-by: Zheyu Fu <[email protected]>

[None][feat] support for more accurate AR calculation (NVIDIA#9323)

Signed-off-by: binghanc <[email protected]>

[TRTLLM-9488][fix] llmapi references (NVIDIA#9547)

Signed-off-by: ixlmar <[email protected]>

[NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143)

Signed-off-by: greg-kwasniewski1 <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522)

Signed-off-by: yunruis <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
Signed-off-by: qgai <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Junyi Xu <[email protected]>
Signed-off-by: Simeng Liu <[email protected]>
Signed-off-by: nv-guomingz <[email protected]>
Signed-off-by: Jin Li <[email protected]>
Signed-off-by: Ivy Zhang <[email protected]>
Signed-off-by: Vincent Zhang <[email protected]>
Signed-off-by: peaceh <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>
Signed-off-by: leslie-fang25 <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: junq <[email protected]>
Co-authored-by: yunruis <[email protected]>
Co-authored-by: sunnyqgg <[email protected]>
Co-authored-by: brb-nv <[email protected]>
Co-authored-by: Yan Chunwei <[email protected]>
Co-authored-by: JunyiXu-nv <[email protected]>
Co-authored-by: Simeng Liu <[email protected]>
Co-authored-by: Guoming Zhang <[email protected]>
Co-authored-by: Jin Li <[email protected]>
Co-authored-by: Ivy Zhang <[email protected]>
Co-authored-by: Vincent Zhang <[email protected]>
Co-authored-by: peaceh-nv <[email protected]>
Co-authored-by: Michal Guzek <[email protected]>
Co-authored-by: Chang Liu <[email protected]>
Co-authored-by: Leslie Fang <[email protected]>
Co-authored-by: Shunkangz <[email protected]>
Co-authored-by: Shunkang <[email protected]>
Co-authored-by: QI JUN <[email protected]>

[TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][infra] - Request idle time exemption for OCI jobs (NVIDIA#9528)

Signed-off-by: Yanchao Lu <[email protected]>

[None][infra] Waive failed tests for main branch on 11/30 (NVIDIA#9555)

Signed-off-by: qqiao <[email protected]>

[None][fix] Fix port conflict in disagg tests (NVIDIA#9474)

Signed-off-by: Junyi Xu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486)

[None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333)

Signed-off-by: yuhangh <[email protected]>

[https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543)

Signed-off-by: Mindy Li <[email protected]>

[None][fix] Correct virtual memory allocation alignment (NVIDIA#9491)

Signed-off-by: Yuan Tong <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398)

Signed-off-by: Pengbo Wang <[email protected]>

[None][chore] remove qwen3-next accuracy tests (NVIDIA#9534)

Signed-off-by: jiant <[email protected]>

[None][doc] fix mtp.py typo (NVIDIA#9307)

Signed-off-by: liugaoji <[email protected]>

[None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544)

Signed-off-by: Fanrong Li <[email protected]>

[NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497)

Signed-off-by: Neta Zmora <[email protected]>

[None][fix] Replace hash method with unique_id for cutedsl MoE runners. (NVIDIA#9569)

Signed-off-by: Yukun He <[email protected]>

[None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581)

Signed-off-by: Zhenhuan Chen <[email protected]>

[TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428)

Signed-off-by: Yuhan Li <[email protected]>

[None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077)

Signed-off-by: Martin Marciniszyn Mehringer <[email protected]>

[https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140)

[None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542)

Signed-off-by: Enwei Zhu <[email protected]>

[None][test] Waive main branch test failures 12/1 (NVIDIA#9566)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Minor change for Slurm scripts (NVIDIA#9561)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Update the pytest options after MI (NVIDIA#9579)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509)

Signed-off-by: Stefan Niebler <[email protected]>

[None][chore] Defer exposing context parallel configs (NVIDIA#9552)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104)

Signed-off-by: Venky Ganesh <[email protected]>

[None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500)

Signed-off-by: Chenghao Zhang <[email protected]>

[NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301)

Signed-off-by: Eran Geva <[email protected]>

[None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262)

Signed-off-by: Yuening Li <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605)

Signed-off-by: junq <[email protected]>

[None][feat] Unify nvfp4 gemm backend (NVIDIA#8963)

Signed-off-by: Shijie Wang <[email protected]>
Signed-off-by: Yukun He <[email protected]>
Signed-off-by: Shijie <[email protected]>
Co-authored-by: Yukun He <[email protected]>

[None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383)

Signed-off-by: Iman Tabrizian <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chore] Polish qwen3-next modeling code. (NVIDIA#8902)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Waive gb200 (NVIDIA#9580)

Signed-off-by: Xin He (SW-GPU) <[email protected]>

[FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9588)

Signed-off-by: xinhe-nv <[email protected]>

[https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576)

Signed-off-by: shuyix <[email protected]>

[None][infra] Waive failed cases for main branch (NVIDIA#9615)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Update allowlist 2025/12/01 (NVIDIA#9616)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620)

Signed-off-by: qqiao <[email protected]>

Lock the gpu clocks in L0 perf tests (NVIDIA#9585)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][fix] Extract GPU count from single-node stage names (NVIDIA#9599)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393)

Signed-off-by: Jin Li <[email protected]>

[TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711)

Signed-off-by: Superjomn <[email protected]>
Signed-off-by: Erin Ho <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Co-authored-by: Erin Ho <[email protected]>

[https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056)

Signed-off-by: thorjohnsen <[email protected]>
Signed-off-by: Thor Johnsen <[email protected]>
Co-authored-by: Iman Tabrizian <[email protected]>
Co-authored-by: Robin Kobus <[email protected]>

[TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889)

Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465)

* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the bits relevant for MoE layers.

Part of NVIDIA#9150.

Signed-off-by: William Zhang <[email protected]>
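
As a rough illustration of the "custom implementation instead of monkey-patching" approach this commit describes: rather than mutating the upstream NemotronH classes at import time, a registry can map a model type to a self-contained module. Everything below (the registry, decorator, and top-1 MoE block) is a hypothetical sketch, not AutoDeploy's actual API.

```python
# Sketch: register an alternative module under a model type in a
# factory, instead of patching the original class.
import torch
import torch.nn as nn

CUSTOM_MODEL_REGISTRY: dict[str, type[nn.Module]] = {}

def register_custom_model(model_type: str):
    def decorator(cls):
        CUSTOM_MODEL_REGISTRY[model_type] = cls
        return cls
    return decorator

@register_custom_model("nemotron_h_moe")
class NemotronHMoeBlock(nn.Module):
    """Stand-in MoE block; a real one would add gating details and fused kernels."""

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(
            nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Top-1 routing for brevity: each token goes to its best expert.
        top1 = self.router(x).argmax(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out
```

The registry keeps the upstream model untouched, which is the point the commit makes against monkey-patching.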

[NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527)

Signed-off-by: Lucas Liebenwein <[email protected]>

[https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels (NVIDIA#9551)

Signed-off-by: Neta Zmora <[email protected]>

[https://nvbugs/5688388][fix] Reduce the number of requests in disagg tests to speed them up (NVIDIA#9598)

Signed-off-by: Patrice Castonguay <[email protected]>

[TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200)

Signed-off-by: Lucas Liebenwein <[email protected]>
Signed-off-by: greg-kwasniewski1 <[email protected]>
Co-authored-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633)

Signed-off-by: Yu Chi Li <[email protected]>

[None][chore] Waive test failing on pre-merge (NVIDIA#9638)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607)

Signed-off-by: Anurag Mukkara <[email protected]>

[TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][test] Remove duplicate test cases (NVIDIA#9623)

Signed-off-by: yufeiwu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572)

Signed-off-by: yuhangh <[email protected]>

[TRTLLM-9242][doc] Add examples showcasing OpenAI-compatible APIs (NVIDIA#9520)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] AutoDeploy update CUDA stream manager for multi-device (NVIDIA#9575)

Signed-off-by: Suyog Gupta <[email protected]>

[TRTLLM-9391][chore] Automatically estimate required workspace. (NVIDIA#9535)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647)

Signed-off-by: Balaram Buddharaju <[email protected]>

[https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622)

Signed-off-by: FredricZ-2007 <[email protected]>

[TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586)

Signed-off-by: Pengyun Lin <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9593)

Signed-off-by: Jie Li <[email protected]>
Co-authored-by: Jie Li <[email protected]>

[TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392)

Signed-off-by: Junyi Xu <[email protected]>

[None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613)

Signed-off-by: nv-guomingz <[email protected]>

[None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510)

Signed-off-by: Perkz Zheng <[email protected]>

[None][doc] Replace the tensorrt icon with the torch icon on overview.md (NVIDIA#9644)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253)

Signed-off-by: Michal Guzek <[email protected]>

[None][fix] Fix wide ep MoE error (NVIDIA#9642)

Signed-off-by: Iman Tabrizian <[email protected]>

[https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483)

Signed-off-by: Gal Hubara Agam <[email protected]>

[OMNIML-2932][feat] nvfp4 awq support (NVIDIA#8698)

Signed-off-by: weimingc <[email protected]>

[NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668)

Signed-off-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540)

Signed-off-by: Wanli Jiang <[email protected]>

[TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669)

[TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792)

Signed-off-by: Yiqing Yan <[email protected]>

[None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381)

Signed-off-by: Nekofish-L <[email protected]>

[TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838)

Signed-off-by: Jin Li <[email protected]>

[None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667)

Signed-off-by: Tailing Yuan <[email protected]>

[TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057)

Signed-off-by: Junyi Xu <[email protected]>
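
A hedged sketch of the /health idea in TRTLLM-8274: once the executor has shut down, the endpoint should stop reporting healthy so orchestrators drain the server. The FastAPI wiring and the `executor.is_shutdown` attribute below are illustrative assumptions, not the real trtllm-serve internals.

```python
# Sketch: /health returns 503 after the executor shuts down, so load
# balancers stop routing traffic to this server.
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

class DummyExecutor:
    def __init__(self):
        self.is_shutdown = False  # would be flipped by the real shutdown path

executor = DummyExecutor()

@app.get("/health")
async def health():
    if executor.is_shutdown:
        return JSONResponse(status_code=503, content={"status": "shutdown"})
    return {"status": "ok"}
```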

[NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556)

Signed-off-by: Tal Cherckez <[email protected]>
Signed-off-by: tcherckez-nvidia <[email protected]>
Co-authored-by: Neta Zmora <[email protected]>

[None][ci] unwaive tests (NVIDIA#9651)

Signed-off-by: Yan Chunwei <[email protected]>

[None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225)

Signed-off-by: Yoray Zack <[email protected]>
Signed-off-by: zackyoray <[email protected]>

[None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617)

Signed-off-by: Jin Li <[email protected]>

[None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562)

Signed-off-by: Anthony Chang <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9662)

Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: xinhe-nv <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Add container notices and documentation (NVIDIA#9185)

Signed-off-by: Parker Drake <[email protected]>

[TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Add feature docs for helix parallelism (NVIDIA#9684)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Added line about partial reuse (NVIDIA#7846)

Signed-off-by: thorjohnsen <[email protected]>

[TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714)

Signed-off-by: Lizhi Zhou <[email protected]>

[https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694)

Signed-off-by: Lizhi Zhou <[email protected]>

[TRTLLM-9562][doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711)

Signed-off-by: Kaiyu Xie <[email protected]>

[NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][tests] Unwaive EPLB tests (NVIDIA#9625)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714)

Signed-off-by: yufeiwu <[email protected]>

[TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583)

Signed-off-by: Robin Kobus <[email protected]>

[None][refactor] Improve request processing function in sampler (NVIDIA#9671)

Signed-off-by: Robin Kobus <[email protected]>

[https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676)

Signed-off-by: jthomson04 <[email protected]>

[None][infra] Update allowed list 20251204 (NVIDIA#9718)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][chore] Waive flaky disagg tests (NVIDIA#9749)

Signed-off-by: Mike Iovine <[email protected]>

[https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311)

Signed-off-by: Iman Tabrizian <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325)

Signed-off-by: jthomson04 <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443)

Signed-off-by: Pengyun Lin <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359)

Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Updated Linux installation guide (NVIDIA#9485)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][fix] Fix triton moe load_weight (NVIDIA#9649)

Signed-off-by: shuyix <[email protected]>

[None][fix] Fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE should use 2D x_sf instead of 1D (NVIDIA#9658)

Signed-off-by: xxi <[email protected]>

[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682)

Signed-off-by: Jonas Li <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[None][fix] enable hmac in RPC (NVIDIA#9745)

Signed-off-by: Superjomn <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646)

Signed-off-by: Junyi Xu <[email protected]>

[None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769)

Signed-off-by: qqiao <[email protected]>

[None][fix] Several minor fixes to CI setting (NVIDIA#9765)

Signed-off-by: Yanchao Lu <[email protected]>

[OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679)

Signed-off-by: Chenjie Luo <[email protected]>

[None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314)

Signed-off-by: Ludwig Schneider <[email protected]>

[TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800)

Signed-off-by: Chenfei Zhang <[email protected]>

[None][test] add ntp tolerance in time metrics verification (NVIDIA#9741)

Signed-off-by: zhengd-nv <[email protected]>
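
For the NTP tolerance above, the underlying idea is that timestamps taken on different hosts can disagree by small clock skew, so ordering assertions need slack. A minimal sketch, with an assumed tolerance value:

```python
# Sketch: compare cross-host timestamps with an allowance for NTP drift
# instead of asserting exact ordering. The 50 ms value is an assumption.
NTP_TOLERANCE_S = 0.05

def assert_ordered(t_start: float, t_end: float) -> None:
    assert t_end >= t_start - NTP_TOLERANCE_S, (
        f"end {t_end} precedes start {t_start} beyond tolerance"
    )
```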

[TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645)

[https://nvbugs/5422621][test] Add GB200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][fix] Fix two tuning cache miss issues. (NVIDIA#9743)

Signed-off-by: Yukun He <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9706][doc] Update wide EP documents (NVIDIA#9724)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>
Co-authored-by: yufeiwu-nv <[email protected]>
Co-authored-by: Larry Xu <[email protected]>

[None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773)

Signed-off-by: qqiao <[email protected]>

[None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768)

Signed-off-by: Fanrong Li <[email protected]>

[None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. (NVIDIA#9690)

Signed-off-by: Mindy Li <[email protected]>

[TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696)

Signed-off-by: nv-guomingz <[email protected]>

[None][chore] Remove closed bugs (NVIDIA#9770)

Signed-off-by: xinhe-nv <[email protected]>

[None][infra] update mooncake in docker images (NVIDIA#9584)

Signed-off-by: zhengd-nv <[email protected]>
Signed-off-by: Zheng Duan <[email protected]>

[None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686)

Signed-off-by: FredricZ-2007 <[email protected]>
Signed-off-by: Kaiyu Xie <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511)

Signed-off-by: FredricZ-2007 <[email protected]>

[https://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775)

Signed-off-by: Lizhi Zhou <[email protected]>

[None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661)

Signed-off-by: qgai <[email protected]>

ray + update_weights works

trtllm works in async env

trtllm works in sync and async env

ray + update_weights works

rebase to the updated verl

server mode

still cherry pick

still cherry pick

still cherry pick

integrated http interface

hang at RayExecutor creating workers via ray.remote

clean code

use tensorrt_llm.rlhf_utils

Signed-off-by: Liwei Ma <[email protected]>

placement, asyncllm, and basic tests
Signed-off-by: Erin Ho <[email protected]>

connect sleep and wakeup; Add support to pass None to update_weights
Signed-off-by: Erin Ho <[email protected]>

Batching ctx for IFB scheduler

Signed-off-by: Yuan Tong <[email protected]>

accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored
Signed-off-by: Erin Ho <[email protected]>

fix e2e integration

Signed-off-by: Superjomn <[email protected]>

update asyncllm, other nits
Signed-off-by: Erin Ho <[email protected]>

fix init setup

Signed-off-by: Erin Ho <[email protected]>

Fix TRTLLMSampler logprobs perf

Signed-off-by: Yuan Tong <[email protected]>

fix and cleanup
Signed-off-by: Erin Ho <[email protected]>

fix server

Signed-off-by: Erin Ho <[email protected]>

Revert "Batching ctx for IFB scheduler"

This reverts commit b51aac0

Signed-off-by: Yuan Tong <[email protected]>

update & address comments

Signed-off-by: Erin Ho <[email protected]>