
Conversation

@hyukn (Collaborator) commented Sep 24, 2025

When the scale is not provided, the fusion pattern needs to be reset to RESIDUAL_RMS_NORM.
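In code, the fix is roughly a guard of this shape (a minimal sketch assuming the attribute names described in the CodeRabbit walkthrough below; quant_is_fp8_or_nvfp4 and QUANT_FUSION_OP are placeholders, not identifiers from the actual diff):

    # Sketch only: names per the walkthrough, not the exact diff.
    from tensorrt_llm.functional import AllReduceFusionOp

    if self.next_attn is not None and quant_is_fp8_or_nvfp4 \
            and hasattr(self.next_attn.qkv_proj, 'input_scale'):
        scale = self.next_attn.qkv_proj.input_scale
        fusion_op = QUANT_FUSION_OP  # FP8/NVFP4-specific fused op (placeholder)
    else:
        # No scale available: reset the fusion pattern to RESIDUAL_RMS_NORM.
        scale = None
        fusion_op = AllReduceFusionOp.RESIDUAL_RMS_NORM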

Summary by CodeRabbit

  • Bug Fixes

    • Improved stability for FP8/NVFP4 execution by safely handling missing scaling metadata, preventing crashes in certain attention/MLP fusion paths and mixed parallelism setups.
  • Tests

    • Added FP8 test coverage for TP=2, PP=2 on the 70B Instruct model.
    • Included the new test in the 4-GPU pre-merge suite to ensure consistent validation.
  • Chores

    • Updated test matrix configuration to incorporate the new FP8 scenario.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
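For example, a pre-merge run that disables fail-fast and runs only a single test stage (using the example stage name from above):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"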

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break top of tree.
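For example (the comment text is illustrative):

/bot skip --comment "Docs-only change; no build/test impact"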

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care can break top of tree.

@hyukn hyukn requested a review from Superjomn September 24, 2025 10:57
@hyukn hyukn requested review from a team as code owners September 24, 2025 10:57
@hyukn (Collaborator, Author) commented Sep 24, 2025

/bot run --disable-fail-fast

@coderabbitai bot (Contributor) commented Sep 24, 2025

📝 Walkthrough

Adds guarded checks around the FP8/NVFP4 scaling and fusion paths in the Llama model forward passes to avoid touching missing input_scale attributes, and mirrors the logic for the post-MLP paths. Introduces a new FP8 TP2/PP2 integration test and registers it in the DGX B200 L0 test list. Note: one new condition contains a typo (elf instead of self).

Changes

Cohort / File(s): Summary of changes

  • LLAMA model guards (tensorrt_llm/_torch/models/modeling_llama.py, tensorrt_llm/_torch/models/modeling_llama_min_latency.py):
    Add guarded checks for next_attn existence, quant modes (FP8/NVFP4), and hasattr(..., 'input_scale') before using scaling; keep RESIDUAL_RMS_NORM when the scale is unavailable; mirror the logic for the post-MLP path. The min_latency path adds guards on the FP8/NVFP4 all-reduce branches; note that one condition uses elf instead of self.
  • Integration test addition (tests/integration/defs/accuracy/test_llm_api_pytorch.py):
    Add test_fp8_tp2pp2 to TestLlama3_3_70BInstruct for FP8 with tensor_parallel_size=2 and pipeline_parallel_size=2, with the same skips/preconditions as related tests (a sketch follows this list).
  • Test list update (tests/integration/test_lists/test-db/l0_dgx_b200.yml):
    Register accuracy/test_llm_api_pytorch.py::TestLlama3_3_70BInstruct::test_fp8_tp2pp2 under the 4-GPU (b200) block.
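As referenced above, here is a sketch of what the new test plausibly looks like. This is a reconstruction from the reviewer excerpts in this thread, not the actual diff; the model path and KV-cache fraction are illustrative assumptions, while the parallelism and batch settings come from the reviewer's diff further below.

    # Hypothetical reconstruction of test_fp8_tp2pp2 in TestLlama3_3_70BInstruct.
    @pytest.mark.skip_less_device(4)
    @skip_pre_blackwell
    def test_fp8_tp2pp2(self):
        model_path = f"{llm_models_root()}/llama-3.3-models/Llama-3.3-70B-Instruct-FP8"  # assumed path
        kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.8)  # illustrative value
        with LLM(model_path,
                 tensor_parallel_size=2,
                 pipeline_parallel_size=2,
                 max_batch_size=32,
                 kv_cache_config=kv_cache_config) as llm:
            task = MMLU(self.MODEL_NAME)
            task.evaluate(llm)
            task = GSM8K(self.MODEL_NAME)
            task.evaluate(llm)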

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant M as LlamaModel
  participant A as next_attn
  participant Q as qkv_proj
  participant F as FusionOp

  Note over M: Forward pass (attention/MLP post-processing)
  M->>A: Check existence of next_attn
  alt next_attn exists
    M->>Q: Check quant mode in {FP8, NVFP4} and hasattr(Q, 'input_scale')
    alt scale available
      M->>Q: Read input_scale
      M->>F: Use scaled post-processing path
    else no scale
      M->>F: Use RESIDUAL_RMS_NORM path
    end
  else no next_attn
    M->>F: Use RESIDUAL_RMS_NORM path
  end
sequenceDiagram
  autonumber
  participant ML as LlamaModel (min_latency)
  participant AR as AllReduce
  participant A as next_attn
  participant Q as qkv_proj

  Note over ML: Post-attention all-reduce (FP8/NVFP4)
  ML->>AR: AllReduce activations
  ML->>A: Verify next_attn present
  ML->>Q: Check mode {FP8, NVFP4} and hasattr(Q, 'input_scale')
  alt input_scale present
    ML->>Q: Apply scaling using input_scale
  else missing input_scale
    ML->>ML: Skip scaling branch
  end
  Note over ML: One condition contains a typo (`elf` vs `self`)
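The guard on the min_latency path, with the elf → self typo corrected, reads as follows (this mirrors the reviewer's suggested diff quoted further below):

    # FP8 post-allreduce branch in modeling_llama_min_latency.py: apply
    # scaling only when next_attn and its qkv_proj.input_scale exist.
    if use_fp8_allreduce and self.next_attn is not None \
            and hasattr(self.next_attn.qkv_proj, 'input_scale'):
        ...  # scaled all-reduce post-processing using input_scale
    # otherwise the scaling branch is skipped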

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check ⚠️ Warning: The pull request description contains only a one-line summary and the untouched template placeholders, without a filled "Description" section or a "Test Coverage" list, so it does not adhere to the required template structure. Resolution: replace the placeholder template with a PR title in the format [JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary, a concise explanation of the issue and solution under "Description", and the specific tests covering the new or modified code under "Test Coverage".

✅ Passed checks (1 passed)

  • Title Check ✅ Passed: The title follows the repository title template (NVBugs link + [fix]) and concisely describes the primary change: preventing illegal access when the scale is missing in Llama3/4. It accurately reflects the code changes (guarding use of input_scale and resetting fusion to RESIDUAL_RMS_NORM) and is clear for reviewers.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (1)

821-834: Typo bug: elf is undefined; use self

This will raise NameError and break the FP8 post-allreduce path.

Apply this diff:

-            if use_fp8_allreduce and self.next_attn is not None \
-                and hasattr(elf.next_attn.qkv_proj, 'input_scale'):
+            if use_fp8_allreduce and self.next_attn is not None \
+                and hasattr(self.next_attn.qkv_proj, 'input_scale'):
🧹 Nitpick comments (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (2)

646-669: Align skip marker with existing FP8 test (run on Hopper+)

The TP4 FP8 test for this model uses skip_pre_hopper, but this new TP2PP2 FP8 test uses skip_pre_blackwell. Unless there’s a known Hopper limitation for TP2PP2 FP8 here, align with the TP4 test to keep coverage consistent across Hopper and Blackwell.

Apply this diff:

-    @pytest.mark.skip_less_device(4)
-    @skip_pre_blackwell
+    @pytest.mark.skip_less_device(4)
+    @skip_pre_hopper
     def test_fp8_tp2pp2(self):

646-669: Optional: match max_seq_len to TP4 FP8 test for consistency

The TP4 FP8 test sets max_seq_len=8192; mirroring it here helps keep resource profiles comparable across configs.

Example change (optional):

         with LLM(model_path,
                  tensor_parallel_size=2,
                  pipeline_parallel_size=2,
+                 max_seq_len=8192,
                  max_batch_size=32,
                  kv_cache_config=kv_cache_config) as llm:
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cfbcf9b and 627df3f.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/models/modeling_llama.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py (2 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/models/modeling_llama.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/models/modeling_llama.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/models/modeling_llama.py
🧠 Learnings (4)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
  • tests/integration/test_lists/test-db/l0_dgx_b200.yml
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
PR: NVIDIA/TensorRT-LLM#6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
PR: NVIDIA/TensorRT-LLM#7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_b200.yml
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
PR: NVIDIA/TensorRT-LLM#7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/test_lists/test-db/l0_dgx_b200.yml
🧬 Code graph analysis (2)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (5)
tests/integration/defs/conftest.py (1)
  • llm_models_root (77-91)
tensorrt_llm/llmapi/llm_args.py (1)
  • KvCacheConfig (976-1110)
tensorrt_llm/quantization/mode.py (1)
  • QuantAlgo (23-47)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (126-512)
tests/integration/defs/accuracy/accuracy_core.py (5)
  • MMLU (276-290)
  • evaluate (147-206)
  • evaluate (712-722)
  • GSM8K (293-308)
  • GPQADiamond (311-324)
tensorrt_llm/_torch/models/modeling_llama.py (2)
tensorrt_llm/functional.py (1)
  • AllReduceFusionOp (3888-3897)
cpp/tensorrt_llm/kernels/customAllReduceKernels.h (1)
  • AllReduceFusionOp (69-171)
🪛 Ruff (0.13.1)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py

822-822: Undefined name elf

(F821)
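The F821 finding above can be reproduced locally (assuming Ruff is installed) with:

ruff check --select F821 tensorrt_llm/_torch/models/modeling_llama_min_latency.py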

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tests/integration/test_lists/test-db/l0_dgx_b200.yml (1)

54-54: LGTM: new TP2PP2 FP8 test correctly registered in 4-GPU B200 pre-merge block

This aligns with the added test and target hardware.

tensorrt_llm/_torch/models/modeling_llama.py (2)

564-571: Good guard to avoid missing input_scale and reset fusion op

This prevents illegal access when scale isn’t present and safely falls back to RESIDUAL_RMS_NORM.


780-787: Mirror guard looks correct for LlamaDecoderLayer

Consistent hasattr check and fallback op avoid attribute errors and maintain correctness.

tensorrt_llm/_torch/models/modeling_llama_min_latency.py (1)

821-834: Search for remaining elf. typos

The previous rg returned no output — cannot confirm absence of other occurrences. Re-run with a literal search to be safe: rg -n --type=py -C1 -F 'elf.'

@tensorrt-cicd (Collaborator): PR_Github #19799 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #19799 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #14895 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 25, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #19856 [ run ] triggered by Bot

@Superjomn (Collaborator) left a comment:

LGTM

@Superjomn Superjomn enabled auto-merge (squash) September 25, 2025 02:50
@tensorrt-cicd (Collaborator): PR_Github #19856 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14943 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 25, 2025

/bot run

@hyukn (Collaborator, Author) commented Sep 26, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20028 [ run ] triggered by Bot

@hyukn hyukn requested review from a team as code owners September 26, 2025 13:34
@hyukn hyukn changed the base branch from main to release/1.1 September 26, 2025 13:34
@hyukn hyukn requested a review from a team as a code owner September 26, 2025 13:34
@tensorrt-cicd (Collaborator): PR_Github #20028 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15086 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@tensorrt-cicd (Collaborator): PR_Github #20210 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #10 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20237 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20237 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #13 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20247 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20247 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #15 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Sep 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20269 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20269 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #20 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Oct 7, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20693 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20693 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #38 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Oct 7, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20709 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20709 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #40 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Oct 7, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #20720 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20720 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #42 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Oct 7, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator): PR_Github #20748 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #20748 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #43 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@Superjomn Superjomn merged commit 1ca84e1 into NVIDIA:release/1.1 Oct 8, 2025
5 checks passed
mikeiovine pushed commits to mikeiovine/TensorRT-LLM that referenced this pull request Oct 8 and Oct 9, 2025
…not provided in Llama3/4. (NVIDIA#7960)

Signed-off-by: Yukun He <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>