
Conversation

@hyukn (Collaborator) commented Nov 12, 2025

Summary by CodeRabbit

  • New Features

    • Added use_cudagraph option to control CUDA graph-based profiling during autotuning.
  • Improvements

    • Increased default profiling iterations from 10 to 30 for more accurate performance measurements.
    • Enhanced profiling path to support CUDA graphs for improved efficiency.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
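For example, typical run invocations composed from the options above (the stage and GPU names are the examples given in this help text, not an exhaustive list):

    /bot run
    /bot run --disable-fail-fast
    /bot run --stage-list "A10-PyTorch-1"
    /bot run --gpu-type "A30, H100_PCIe"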

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without due care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without due care and validation can break the top of tree.
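For example (the skip comment text is the placeholder from the help above; substitute a real reason):

    /bot kill
    /bot skip --comment "Reason for skipping build/test"
    /bot reuse-pipeline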

@hyukn requested a review from a team as a code owner on November 12, 2025 07:59
@coderabbitai bot (Contributor) commented Nov 12, 2025

📝 Walkthrough

Added CUDA graph support to autotuning profiling with a new use_cudagraph option in TuningConfig. Profiling logic updated to conditionally use CUDA graphs for repeated kernel execution, with fallback to sequential runs when disabled. Default repeat count increased from 10 to 30.

Changes

Cohort / File(s): CUDA graph profiling support — tensorrt_llm/_torch/autotuner.py
Summary: Added a use_cudagraph: bool = True field to TuningConfig. Updated _profile_single_kernel to accept a use_cudagraph parameter and conditionally create/replay CUDA graphs vs. sequential runs with host timing. Updated _profile_runners to forward the flag. Increased the AutoTuner.__init__ default repeat count from 10 to 30. Removed the profiling_debug attribute initialization.
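As a minimal sketch of the configuration-side change (only the field name, its default, and the forwarding behavior come from the summary above; the surrounding dataclass shape and call site are assumptions for illustration, not the actual TRT-LLM code):

    from dataclasses import dataclass

    @dataclass
    class TuningConfig:
        # ... existing tuning fields elided ...
        use_cudagraph: bool = True  # new field: enable CUDA graph-based profiling

    # Inside AutoTuner._profile_runners, the flag is forwarded to the profiling
    # method (call shape assumed for illustration):
    #     self._profile_single_kernel(runner, inputs, tactic,
    #                                 use_cudagraph=tuning_config.use_cudagraph)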

Sequence Diagram(s)

sequenceDiagram
    participant Profiler as Profiler
    participant Kernel as _profile_single_kernel
    participant CUDA as CUDA Runtime
    
    Note over Profiler,CUDA: With use_cudagraph=True (new path)
    Profiler->>Kernel: call with use_cudagraph=True
    Kernel->>CUDA: create stream context
    Kernel->>CUDA: warmup runs in stream
    Kernel->>CUDA: create CUDAGraph
    loop repeated execution
        Kernel->>CUDA: replay graph
    end
    Kernel->>CUDA: synchronize & record time
    Kernel-->>Profiler: return timing
    
    Note over Profiler,CUDA: With use_cudagraph=False (legacy path)
    Profiler->>Kernel: call with use_cudagraph=False
    loop repeated runs
        Kernel->>CUDA: launch kernel
        Kernel->>CUDA: host delay (stream_delay_micro_secs)
    end
    Kernel->>CUDA: synchronize & record time
    Kernel-->>Profiler: return timing

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Logic complexity: Conditional branching between two profiling paths (CUDA graphs vs. sequential) requires careful verification of timing correctness and synchronization behavior.
  • API surface changes: Multiple method signatures updated (AutoTuner.__init__, _profile_single_kernel, callers in _profile_runners); verify parameter forwarding is consistent throughout.
  • CUDA semantics: Review graph creation, replay logic, and synchronization points to ensure correct profiling measurements and avoid race conditions.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check — ⚠️ Warning: The PR description contains only the template structure; the critical Description and Test Coverage sections are blank. Resolution: fill in the Description section to explain the issue and the solution, and list the relevant tests under Test Coverage.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 33.33%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check — ✅ Passed: The PR title clearly and specifically describes the main change (using CUDA graphs to improve tuning accuracy in the AutoTuner), with a proper JIRA ticket reference and feature type.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/autotuner.py (1)

859-881: Update docstring to document the new parameter.

The use_cudagraph parameter was added to the method signature but is not documented in the docstring. The Args section should include this new parameter.

Apply this diff to update the docstring:

     def _profile_single_kernel(
         self,
         runner: TunableRunner,
         inputs: List[torch.Tensor],
         tactic: Any,
         use_cudagraph: bool = True,
         **kwargs,
     ) -> float:
         """Profile a single kernel implementation for performance measurement.
 
         Args:
             runner (TunableRunner): The runner implementation to profile
             inputs (List[torch.Tensor]): Input tensors for the kernel
             tactic (Any): Tactic to use for this profiling run
+            use_cudagraph (bool): Whether to use CUDA graphs for profiling (default: True)
+                When True, captures and replays kernels using CUDA graphs for more accurate timing.
+                When False, uses sequential execution with stream delay to reduce host overhead.
 
         Returns:
             Average execution time in milliseconds
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96132b4 and 5572f86.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/autotuner.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/autotuner.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/autotuner.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/autotuner.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: MrGeva
Repo: NVIDIA/TensorRT-LLM PR: 7219
File: tensorrt_llm/_torch/auto_deploy/compile/backends/torch_cudagraph.py:162-168
Timestamp: 2025-09-04T07:33:10.618Z
Learning: When users explicitly provide cuda_graph_batch_sizes in TorchCudagraphCompiler, respect their choices and only sanitize the values (clamp, dedupe, sort) without forcing additional batch sizes like 1 or max_batch_size. Only add commonly-used batch sizes when falling back to the heuristic.
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/_torch/autotuner.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/autotuner.py (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
  • warmup (86-92)
  • warmup (522-550)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (3)
tensorrt_llm/_torch/autotuner.py (3)

102-102: LGTM! Field addition looks good.

The new use_cudagraph field appropriately controls CUDA graph usage during profiling, with a sensible default of True for improved accuracy.


813-818: LGTM! Parameter forwarding is correct.

The use_cudagraph flag is appropriately forwarded from tuning_config to the profiling method.


888-916: LGTM! Profiling logic is correctly implemented.

The profiling flow appropriately handles both CUDA graph and non-graph paths (a minimal sketch follows the list below):

  • Warmup runs occur first in the stream context
  • CUDA graphs capture repeated executions when enabled
  • Stream delay is correctly applied only for non-graph profiling to reduce host overhead
  • Timing and synchronization are properly implemented for both paths
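A minimal, self-contained sketch of that flow — a hand-written approximation of the behavior described above, not the actual tensorrt_llm/_torch/autotuner.py code; the warmup count, function name, and call shape are assumptions:

    import torch

    def profile_kernel(fn, warmup: int = 3, repeat: int = 30,
                       use_cudagraph: bool = True) -> float:
        """Return the average execution time of fn in milliseconds."""
        stream = torch.cuda.Stream()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        with torch.cuda.stream(stream):
            for _ in range(warmup):  # warmup runs come first in both paths
                fn()
            if use_cudagraph:
                graph = torch.cuda.CUDAGraph()
                with torch.cuda.graph(graph, stream=stream):
                    fn()  # capture one execution of the kernel
                start.record(stream)
                for _ in range(repeat):
                    graph.replay()  # replay skips per-launch host overhead
                end.record(stream)
            else:
                start.record(stream)
                for _ in range(repeat):
                    # Plain launches; per the diagram above, the real code also
                    # inserts a host-side delay (stream_delay_micro_secs) here
                    # to keep the stream busy. Omitted in this sketch.
                    fn()
                end.record(stream)
        end.synchronize()
        return start.elapsed_time(end) / repeat

Replaying a captured graph re-launches the recorded kernels without re-paying Python and launch latency on the host, which is why graph-based timing tends to be steadier; this matches the accuracy rationale in the walkthrough.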

@hyukn (Collaborator, Author) commented Nov 12, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24279 [ run ] triggered by Bot. Commit: 5572f86

@tensorrt-cicd (Collaborator)

PR_Github #24279 [ run ] completed with state SUCCESS. Commit: 5572f86
/LLM/main/L0_MergeRequest_PR pipeline #18315 completed with status: 'FAILURE'

@hyukn force-pushed the feat/autotune_profile_cudagraph branch from 5572f86 to 90e428e on November 13, 2025 03:44
@hyukn (Collaborator, Author) commented Nov 13, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24384 [ run ] triggered by Bot. Commit: 90e428e

@tensorrt-cicd (Collaborator)

PR_Github #24384 [ run ] completed with state SUCCESS. Commit: 90e428e
/LLM/main/L0_MergeRequest_PR pipeline #18400 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Nov 13, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24433 [ run ] triggered by Bot. Commit: 90e428e

@tensorrt-cicd (Collaborator)

PR_Github #24433 [ run ] completed with state SUCCESS. Commit: 90e428e
/LLM/main/L0_MergeRequest_PR pipeline #18435 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Nov 13, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24482 [ run ] triggered by Bot. Commit: 90e428e

@tensorrt-cicd (Collaborator)

PR_Github #24482 [ run ] completed with state SUCCESS. Commit: 90e428e
/LLM/main/L0_MergeRequest_PR pipeline #18476 completed with status: 'FAILURE'

@hyukn force-pushed the feat/autotune_profile_cudagraph branch from 90e428e to f4bc776 on November 14, 2025 00:20
@hyukn (Collaborator, Author) commented Nov 14, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24517 [ run ] triggered by Bot. Commit: f4bc776

@tensorrt-cicd (Collaborator)

PR_Github #24517 [ run ] completed with state SUCCESS. Commit: f4bc776
/LLM/main/L0_MergeRequest_PR pipeline #18505 completed with status: 'FAILURE'

@hyukn force-pushed the feat/autotune_profile_cudagraph branch 3 times, most recently from e15ff36 to 7a3429e on November 17, 2025 06:15
@hyukn (Collaborator, Author) commented Nov 17, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24728 [ run ] completed with state FAILURE. Commit: 7a3429e

@hyukn (Collaborator, Author) commented Nov 17, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24768 [ run ] triggered by Bot. Commit: 7a3429e

@hyukn force-pushed the feat/autotune_profile_cudagraph branch from 7b62cec to 447f0f0 on November 18, 2025 01:47
@hyukn (Collaborator, Author) commented Nov 18, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24822 [ run ] triggered by Bot. Commit: 447f0f0

@tensorrt-cicd (Collaborator)

PR_Github #24768 [ run ] completed with state ABORTED. Commit: 7a3429e
LLM/main/L0_MergeRequest_PR #18685 (Blue Ocean) completed with status: ABORTED

@hyukn force-pushed the feat/autotune_profile_cudagraph branch from 447f0f0 to d623c7d on November 18, 2025 02:10
@hyukn (Collaborator, Author) commented Nov 18, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24825 [ run ] triggered by Bot. Commit: d623c7d

@tensorrt-cicd (Collaborator)

PR_Github #24822 [ run ] completed with state ABORTED. Commit: 447f0f0
LLM/main/L0_MergeRequest_PR #18734 (Blue Ocean) completed with status: ABORTED

@hyukn force-pushed the feat/autotune_profile_cudagraph branch from d623c7d to b964920 on November 18, 2025 03:39
@hyukn (Collaborator, Author) commented Nov 18, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24839 [ run ] triggered by Bot. Commit: b964920

@tensorrt-cicd (Collaborator)

PR_Github #24825 [ run ] completed with state ABORTED. Commit: d623c7d
LLM/main/L0_MergeRequest_PR #18736 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #24839 [ run ] completed with state SUCCESS. Commit: b964920
/LLM/main/L0_MergeRequest_PR pipeline #18749 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Nov 18, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24915 [ run ] triggered by Bot. Commit: b964920

@tensorrt-cicd (Collaborator)

PR_Github #24915 [ run ] completed with state SUCCESS. Commit: b964920
/LLM/main/L0_MergeRequest_PR pipeline #18815 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Nov 19, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #24967 [ run ] triggered by Bot. Commit: b964920

@kaiyux enabled auto-merge (squash) on November 19, 2025 02:10
@tensorrt-cicd (Collaborator)

PR_Github #24967 [ run ] completed with state SUCCESS. Commit: b964920
/LLM/main/L0_MergeRequest_PR pipeline #18861 completed with status: 'FAILURE'

@hyukn (Collaborator, Author) commented Nov 19, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #25060 [ run ] triggered by Bot. Commit: 3c31a63

@tensorrt-cicd (Collaborator)

PR_Github #25060 [ run ] completed with state SUCCESS. Commit: 3c31a63
/LLM/main/L0_MergeRequest_PR pipeline #18940 completed with status: 'SUCCESS'

@kaiyux merged commit b6bced8 into NVIDIA:main on Nov 20, 2025
5 checks passed
