
Conversation

@Funatiq
Collaborator

@Funatiq Funatiq commented Sep 17, 2025

Summary by CodeRabbit

  • New Features

    • Added support for specifying a checkpoint format during input processing, enabling smoother use of non-Hugging Face and multimodal checkpoints.
    • Introduced an optional configuration to bypass auto-loading of model configs when not needed, improving robustness.
  • Documentation

    • Updated usage to highlight the new optional checkpoint format setting for input processing.

Description

  • Modified the create_input_processor function to accept an optional checkpoint_format parameter (default None, which is treated as "HF").
  • The function now conditionally attempts to load the model configuration based on the specified format; a condensed sketch follows.
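
A condensed sketch of the new control flow (adapted from the review diff further down; ModelConfig and DefaultInputProcessor are the existing tensorrt_llm classes, and the registry lookup is elided):

from typing import Optional

def create_input_processor(model_path_or_dir: str,
                           tokenizer,
                           checkpoint_format: Optional[str] = None):
    """Create an input processor; non-"HF" formats skip HF config loading."""
    from tensorrt_llm._torch.model_config import ModelConfig

    model_config = None
    if checkpoint_format is None or checkpoint_format == "HF":
        try:
            config = ModelConfig.from_pretrained(model_path_or_dir,
                                                 trust_remote_code=True)
            model_config = config.pretrained_config
        except (ValueError, EnvironmentError):
            pass  # no loadable HF config; fall through to the default
    if model_config is not None:
        ...  # resolve a model-specific processor from the registry (elided)
    return DefaultInputProcessor(None, None, tokenizer)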

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
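
For example, a single comment can combine several of the flags above (illustrative values):

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --extra-stage "H100_PCIe-TensorRT-Post-Merge-1"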

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@Funatiq
Collaborator Author

Funatiq commented Sep 17, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #19014 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19014 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14259 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Sep 18, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #19166 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #19166 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14385 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@Funatiq Funatiq marked this pull request as ready for review September 29, 2025 14:47
@Funatiq Funatiq requested a review from a team as a code owner September 29, 2025 14:47
@Funatiq Funatiq requested a review from hchings September 29, 2025 14:47
@coderabbitai
Contributor

coderabbitai bot commented Sep 29, 2025

📝 Walkthrough

Walkthrough

Introduces an optional checkpoint_format parameter to create_input_processor, altering control flow to conditionally skip HF config loading for non-HF formats. Updates LLM Torch backend to read checkpoint_format from args and pass it to input processor creation. Defaults preserve previous behavior when checkpoint_format is None or "HF".

Changes

  • Inputs registry API and control flow (tensorrt_llm/inputs/registry.py): Added an optional parameter checkpoint_format: Optional[str] = None to create_input_processor. ModelConfig loading is now conditional: from_pretrained is only attempted when checkpoint_format is None or "HF". For other formats, the config load is skipped and the function falls back to DefaultInputProcessor if no model-specific processor is resolved.
  • LLM Torch backend integration (tensorrt_llm/llmapi/llm.py): Reads checkpoint_format from self.args and passes it to create_input_processor(self._hf_model_dir, self.tokenizer, checkpoint_format), aligning the callsite with the updated input-processor API (condensed sketch below).
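
In condensed form, the updated callsite looks like the following (the getattr fallback mirrors the reviewer's suggested pattern shown later in this review; the surrounding code is not reproduced here):

checkpoint_format = getattr(self.args, "checkpoint_format", None)
self.input_processor = create_input_processor(self._hf_model_dir,
                                              self.tokenizer,
                                              checkpoint_format)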

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Client
  participant LLM as LLM (Torch backend)
  participant Inputs as inputs.create_input_processor
  participant HF as ModelConfig.from_pretrained

  Client->>LLM: build model
  LLM->>LLM: read args.checkpoint_format
  LLM->>Inputs: create_input_processor(model_dir, tokenizer, checkpoint_format)

  alt checkpoint_format is None or "HF"
    Inputs->>HF: try load ModelConfig
    alt load succeeds
      HF-->>Inputs: model_config
      Inputs->>Inputs: select processor by model type
    else load fails
      HF-->>Inputs: error
      Inputs->>Inputs: fallback to DefaultInputProcessor
    end
  else non-HF format
    Inputs->>Inputs: skip HF config load
    Inputs->>Inputs: fallback/resolve appropriate processor (may default)
  end

  Inputs-->>LLM: InputProcessor instance
  LLM-->>Client: proceed with preprocessing

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 66.67%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check: ⚠️ Warning. The pull request description is incomplete compared to the required template. The "Description" section is present with brief bullet points explaining the checkpoint_format parameter, but the "Test Coverage" section is entirely empty, and most checklist items are unmarked, with only the final generic checkbox checked, indicating that critical pre-submission items were not verified. Resolution: complete the "Test Coverage" section by listing the tests that cover the new checkpoint_format parameter and the conditional model-configuration loading, and mark the checklist items according to actual compliance with the TRT-LLM coding guidelines, test coverage, dependency scanning, and documentation requirements.
✅ Passed checks (1 passed)
  • Title Check: ✅ Passed. The title "[TRTLLM-8714][fix] update create_input_processor to handle custom checkpoint format" follows the required template, with a valid JIRA ticket [TRTLLM-8714], the type [fix], and a clear, specific summary that matches the main change: modifying create_input_processor to accept and conditionally use the new checkpoint_format parameter.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tensorrt_llm/inputs/registry.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025).

Per coding guidelines, prepend the NVIDIA Apache-2.0 copyright header with current year to all .py files.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
tensorrt_llm/llmapi/llm.py (2)

1-1: Add NVIDIA Apache-2.0 header (2025).

Apply the standard header at top of file.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

960-978: Add checkpoint_format to all create_input_processor calls
In tensorrt_llm/llmapi/llm.py (lines 807–811) and tensorrt_llm/llmapi/mm_encoder.py (lines 54–56), include the checkpoint_format argument (e.g. getattr(self.args, "checkpoint_format", None)) when invoking create_input_processor.

🧹 Nitpick comments (4)
tensorrt_llm/inputs/registry.py (3)

411-418: Polish API: add return type, expand docstring, and follow module-namespace import style.

  • Add return type hint (InputProcessor).
  • Expand docstring with Args/Returns and default behavior.
  • Import modules, not symbols, per guidelines.
-def create_input_processor(model_path_or_dir: str,
-                           tokenizer,
-                           checkpoint_format: Optional[str] = None):
-    """Create an input processor for a specific model.
-
-    If checkpoint_format is not "HF", fall back to DefaultInputProcessor.
-    """
-    from tensorrt_llm._torch.model_config import ModelConfig
-    from tensorrt_llm._torch.models import get_model_architecture
+def create_input_processor(model_path_or_dir: str,
+                           tokenizer,
+                           checkpoint_format: Optional[str] = None) -> InputProcessor:
+    """Create an input processor for a specific model.
+
+    Args:
+        model_path_or_dir: Path or repo id used to locate pretrained config/tokenizer.
+        tokenizer: Tokenizer instance.
+        checkpoint_format: Checkpoint format identifier. "HF" uses Hugging Face-style
+            config loading; any other value skips HF config loading. None is treated as "HF".
+
+    Returns:
+        An InputProcessor implementation (model-specific if registered; otherwise DefaultInputProcessor).
+    """
+    from tensorrt_llm._torch import model_config as tllm_model_config
+    from tensorrt_llm._torch import models as tllm_models

421-445: Make format check case-insensitive, log the branch, broaden exception, and pass through model_path on fallback.

  • Case-insensitive "HF" check.
  • Debug logs to clarify behavior.
  • Catch OSError (alias of EnvironmentError) to be future-proof; drop unused local.
  • Pass model_path_or_dir to DefaultInputProcessor for context.
-    model_config = None
-
-    if checkpoint_format is None or checkpoint_format == "HF":
-        try:
-            config = ModelConfig.from_pretrained(model_path_or_dir,
-                                                 trust_remote_code=True)
-            model_config = config.pretrained_config
-        except (ValueError, EnvironmentError):
-            config = None
+    model_config = None
+    fmt_is_hf = (checkpoint_format or "HF").upper() == "HF"
+    if not fmt_is_hf:
+        logger.debug(f"checkpoint_format={checkpoint_format!r}; skipping HF config load.")
+    if fmt_is_hf:
+        try:
+            config = tllm_model_config.ModelConfig.from_pretrained(
+                model_path_or_dir, trust_remote_code=True)
+            model_config = config.pretrained_config
+        except (ValueError, EnvironmentError, OSError) as e:
+            logger.debug(f"Unable to load HF config from {model_path_or_dir!r}: {e}. Falling back.")
 
     if model_config is not None:
         try:
-            model_cls, _ = get_model_architecture(model_config)
+            model_cls, _ = tllm_models.get_model_architecture(model_config)
             input_processor_cls = INPUT_PROCESSOR_REGISTRY._input_processors_cls_by_model_type \
                 .get(model_cls)
         except RuntimeError:  # unregistered model
             logger.info("Unregistered model, using DefaultInputProcessor")
             input_processor_cls = None
         if input_processor_cls is not None:
             return input_processor_cls(model_path_or_dir,
                                        model_config,
                                        tokenizer,
                                        trust_remote_code=True)
 
-    return DefaultInputProcessor(None, None, tokenizer)
+    return DefaultInputProcessor(model_path_or_dir, None, tokenizer)

411-446: Add basic unit tests to lock HF vs non‑HF behavior.

Recommend covering:

  • checkpoint_format None/"HF" loads HF config and returns registered processor when available.
  • checkpoint_format "custom" skips HF and returns DefaultInputProcessor.
  • Graceful fallback when HF config load fails.

Would you like me to scaffold pytest tests for these paths?
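
A minimal pytest sketch of two of these cases (assuming the imports below resolve; the monkeypatch targets are illustrative and may need adjusting to the actual module layout):

from tensorrt_llm.inputs.registry import (DefaultInputProcessor,
                                          create_input_processor)

def test_custom_format_skips_hf_config(monkeypatch):
    # For a non-HF format, no HF config load should be attempted and the
    # default processor should be returned.
    def fail_if_called(*args, **kwargs):
        raise AssertionError("HF config load should have been skipped")
    monkeypatch.setattr(
        "tensorrt_llm._torch.model_config.ModelConfig.from_pretrained",
        fail_if_called)
    proc = create_input_processor("dummy/path", tokenizer=None,
                                  checkpoint_format="custom")
    assert isinstance(proc, DefaultInputProcessor)

def test_hf_config_load_failure_falls_back(monkeypatch):
    # If HF config loading raises, the default processor is still returned.
    def raise_value_error(*args, **kwargs):
        raise ValueError("no config here")
    monkeypatch.setattr(
        "tensorrt_llm._torch.model_config.ModelConfig.from_pretrained",
        raise_value_error)
    proc = create_input_processor("dummy/path", tokenizer=None,
                                  checkpoint_format="HF")
    assert isinstance(proc, DefaultInputProcessor)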

tensorrt_llm/llmapi/llm.py (1)

809-812: Parity: pass checkpoint_format in TRT backend too to avoid unnecessary HF config attempts.

This keeps behavior consistent with PyTorch backend and skips HF config load when using custom formats.

-        self.input_processor = create_input_processor(self._hf_model_dir,
-                                                      self.tokenizer)
+        checkpoint_format = getattr(self.args, "checkpoint_format", None)
+        self.input_processor = create_input_processor(self._hf_model_dir,
+                                                      self.tokenizer,
+                                                      checkpoint_format)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a55251b and e92f6c7.

📒 Files selected for processing (2)
  • tensorrt_llm/inputs/registry.py (1 hunks)
  • tensorrt_llm/llmapi/llm.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/inputs/registry.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/inputs/registry.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/inputs/registry.py
🧬 Code graph analysis (2)
tensorrt_llm/llmapi/llm.py (3)
tensorrt_llm/_torch/models/checkpoints/base_checkpoint_loader.py (1)
  • checkpoint_format (50-51)
tensorrt_llm/_torch/models/checkpoints/hf/checkpoint_loader.py (1)
  • checkpoint_format (74-75)
tensorrt_llm/inputs/registry.py (1)
  • create_input_processor (411-445)
tensorrt_llm/inputs/registry.py (1)
tensorrt_llm/_torch/models/modeling_utils.py (1)
  • get_model_architecture (708-720)
🔇 Additional comments (2)
tensorrt_llm/inputs/registry.py (1)

411-446: All create_input_processor callsites pass at most three positional arguments; the new optional checkpoint_format parameter won’t break existing calls.

tensorrt_llm/llmapi/llm.py (1)

974-977: LGTM: checkpoint_format default present
TorchLlmArgs defines checkpoint_format with default=None and a post-validator that sets it to "HF" when unset, so threading it into create_input_processor is safe.
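
Illustratively, that defaulting behavior could be sketched in pydantic style as follows (class and validator names here are assumptions for illustration, not the actual TorchLlmArgs code):

from typing import Optional

from pydantic import BaseModel, model_validator

class TorchLlmArgsSketch(BaseModel):
    # Hypothetical stand-in for the real TorchLlmArgs field.
    checkpoint_format: Optional[str] = None

    @model_validator(mode="after")
    def _default_checkpoint_format(self):
        # Mirrors the described post-validator: unset means "HF".
        if self.checkpoint_format is None:
            self.checkpoint_format = "HF"
        return self

assert TorchLlmArgsSketch().checkpoint_format == "HF"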

@Funatiq Funatiq marked this pull request as draft October 9, 2025 08:20
@Funatiq
Collaborator Author

Funatiq commented Oct 9, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20894 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20894 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15805 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 9, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20906 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20906 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15816 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 9, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #20912 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20912 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15817 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the fix/custom_loading branch from d95478f to ecb5914 on October 10, 2025 11:41
@Funatiq
Collaborator Author

Funatiq commented Oct 10, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21029 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21029 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15899 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 10, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #21035 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #21035 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15901 completed with status: 'FAILURE'

@tensorrt-cicd
Collaborator

PR_Github #21699 [ run ] completed with state SUCCESS. Commit: 1d2f191
/LLM/main/L0_MergeRequest_PR pipeline #16350 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 20, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #21897 [ run ] triggered by Bot. Commit: dad1982

@Superjomn
Collaborator

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21907 [ run ] triggered by Bot. Commit: dad1982

@tensorrt-cicd
Collaborator

PR_Github #21897 [ run ] completed with state ABORTED. Commit: dad1982
LLM/main/L0_MergeRequest_PR #16508 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #21907 [ run ] completed with state SUCCESS. Commit: dad1982
/LLM/main/L0_MergeRequest_PR pipeline #16514 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22032 [ run ] triggered by Bot. Commit: dad1982

… format

- Modified the create_input_processor function to accept a checkpoint_format parameter, defaulting to "HF".
- Add detailed parameter descriptions and return type clarification.
- The function now conditionally attempts to load the model configuration based on the specified format.

Signed-off-by: Robin Kobus <[email protected]>
- Added debug logging for exceptions when loading the HF model configuration.
- Included a fallback message when skipping the HF config load based on checkpoint format.

Signed-off-by: Robin Kobus <[email protected]>
…ading

- Get checkpoint_format in MultimodalEncoder and pass it to create_input_processor.

Signed-off-by: Robin Kobus <[email protected]>
@Funatiq Funatiq force-pushed the fix/custom_loading branch from dad1982 to f288868 on October 21, 2025 10:47
@Funatiq
Collaborator Author

Funatiq commented Oct 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22050 [ run ] triggered by Bot. Commit: f288868

@tensorrt-cicd
Collaborator

PR_Github #22032 [ run ] completed with state ABORTED. Commit: dad1982
LLM/main/L0_MergeRequest_PR #16611 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #22050 [ run ] completed with state SUCCESS. Commit: f288868
/LLM/main/L0_MergeRequest_PR pipeline #16626 completed with status: 'FAILURE'

@Funatiq
Collaborator Author

Funatiq commented Oct 21, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #22066 [ run ] triggered by Bot. Commit: f288868

@tensorrt-cicd
Collaborator

PR_Github #22066 [ run ] completed with state SUCCESS. Commit: f288868
/LLM/main/L0_MergeRequest_PR pipeline #16639 completed with status: 'SUCCESS'

Collaborator

@hchings hchings left a comment

Left a nit. Otherwise LGTM.

@Funatiq Funatiq merged commit 3a5845e into NVIDIA:main Oct 23, 2025
5 checks passed
@Funatiq Funatiq deleted the fix/custom_loading branch October 23, 2025 08:28
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025