
Conversation

@yunruis
Contributor

@yunruis yunruis commented Oct 31, 2025

Summary by CodeRabbit

Release Notes

  • Chores
    • Optimized internal memory management and resource cleanup in singleton object caching logic.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
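
For example, combining the documented options above, a hypothetical invocation could look like the following (the stage and GPU names are the illustrative values from the help text, not a recommendation for this PR):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
/bot run --reuse-test --gpu-type "A30, H100_PCIe"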

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@yunruis yunruis requested a review from a team as a code owner October 31, 2025 06:04
@yunruis
Contributor Author

yunruis commented Oct 31, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23141 [ run ] triggered by Bot. Commit: 4b7e1cc

@coderabbitai
Contributor

coderabbitai bot commented Oct 31, 2025

📝 Walkthrough

Modified PerCudaCtxPerThreadSingletonCreator class to store the observer map as a dynamically allocated pointer rather than an inline member. Updated all map access patterns to dereference the pointer, added explicit destructor for cleanup, and added nullptr assignments in CUDA handle deleter lambdas.

Changes

Cohort: Singleton observer storage refactoring
File(s): cpp/tensorrt_llm/common/opUtils.cpp
Summary: Changed mObservers from an inline map to a heap-allocated pointer; the constructor now allocates the map and the destructor deletes it under lock. Updated all observer access patterns: mObservers[key] → (*mObservers)[key], mObservers.find(key) → mObservers->find(key). Added nullptr assignments in the getCublasHandle and getCublasLtHandle deleter lambdas.
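
The pattern is easier to picture with a minimal, hedged sketch. The class and method names below (PerKeySingletonCreator, get) are illustrative placeholders rather than the real symbols from opUtils.cpp; the sketch only shows the shape of the change: the map sits behind a raw pointer allocated in the constructor, every lookup dereferences that pointer, and the explicit destructor deletes it under the same lock that guards lookups.

#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Hypothetical, simplified stand-in for the real creator class.
template <typename T>
class PerKeySingletonCreator
{
public:
    PerKeySingletonCreator()
        : mObservers{new std::unordered_map<std::string, std::weak_ptr<T>>()}
    {
    }

    PerKeySingletonCreator(PerKeySingletonCreator const&) = delete;
    PerKeySingletonCreator& operator=(PerKeySingletonCreator const&) = delete;

    ~PerKeySingletonCreator()
    {
        // Explicit destructor: delete the heap-allocated map under the lock.
        std::lock_guard<std::mutex> lock(mMutex);
        delete mObservers;
        mObservers = nullptr;
    }

    std::shared_ptr<T> get(std::string const& key)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        // Access now goes through the pointer:
        //   mObservers.find(key)  becomes  mObservers->find(key)
        //   mObservers[key]       becomes  (*mObservers)[key]
        std::shared_ptr<T> result;
        auto it = mObservers->find(key);
        if (it != mObservers->end())
        {
            result = it->second.lock();
        }
        if (!result)
        {
            result = std::make_shared<T>();
            (*mObservers)[key] = result;
        }
        return result;
    }

private:
    std::mutex mMutex;
    // Heap-allocated pointer instead of an inline member map.
    std::unordered_map<std::string, std::weak_ptr<T>>* mObservers;
};

Keeping the map on the heap decouples its lifetime from static destruction order, which is a common motivation for this kind of change when fixing crashes at program exit, though the PR description itself does not spell out that rationale.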

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Mechanical pattern changes applied consistently across observer access points (dereference pointer syntax)
  • Memory lifecycle changes localized to constructor/destructor initialization and cleanup
  • Deleter lambda modifications are straightforward (adding null assignments)
  • Primary risk: ensure all access patterns were converted consistently and the pointer is always valid before dereference

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Description Check: ⚠️ Warning
Explanation: The PR description is largely incomplete. While the PR template structure is present, the critical sections required by the template are left empty with only template comments and placeholders. The "Description" section (which should explain the issue and solution) contains only a template comment with no actual details. The "Test Coverage" section similarly contains only a template comment with no information about relevant tests. The PR Checklist is marked as complete, but none of the substantive description fields have been filled with actual content explaining the changes, their rationale, or testing strategy.
Resolution: The author should fill in the "Description" section with a clear explanation of the segmentation fault issue, the root cause, and how the destructor and observer management changes address it. Additionally, the "Test Coverage" section must list the specific tests (if any) that validate the fix or indicate how the fix was tested to ensure the segmentation fault is resolved. These substantive sections are essential for reviewers to understand the scope and validation of the changes.

Docstring Coverage: ⚠️ Warning
Explanation: Docstring coverage is 40.00%, which is insufficient; the required threshold is 80.00%.
Resolution: You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Title Check: ✅ Passed
Explanation: The PR title "[https://nvbugs/5606268][fix] Fix program exit segment fault triggered CublasMMWarpper deconstructor" follows the correct format specified in the template, with a valid NVBugs ID, a valid type ([fix]), and a clear summary. The title is specific and describes the main issue being addressed. The changes in the raw_summary align with this title: the modifications to the PerCudaCtxPerThreadSingletonCreator class's destructor, observer management, and explicit nulling of handles are all aimed at addressing memory lifecycle issues that could cause segmentation faults during program exit. The title is concise, readable, and clearly communicates the primary change from the developer's perspective.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a1d9126 and 4b7e1cc.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/common/opUtils.cpp (6 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}: Namespace closing braces must include a trailing comment with the namespace name (e.g., '} // namespace foo').
Prefer const or constexpr variables over #define for constants.
Declare variables that are not modified after initialization as const.
Avoid magic literals in code; except for 0, nullptr, true, false. Use named constants for comparisons and logic.
Use Allman brace style for formatting.
Place the semicolon of an empty for/while loop on a new line.
Bodies of switch/while/do-while/for must be compound statements (brace-delimited), and if/else must always be followed by brace-delimited statements.
Type names (e.g., classes) must be CamelCase starting with an uppercase letter (e.g., FooBar).
Local variables, methods, and namespaces use lowerCamelCase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not in an anonymous namespace must be lowerCamelCase prefixed with 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number globals that are static or in an anonymous namespace use lowerCamelCase prefixed with 's' (e.g., sMutableStaticGlobal).
Locally visible static variables use lowerCamelCase with 's' prefix (e.g., static std::once_flag sFlag).
Private/protected member variables use 'm' prefix with CamelCase (e.g., mNbFooValues). Public members may omit, but 'm' is encouraged for clarity.
Constants (enums, global constants, static constants, and function-scope magic/literal constants) use uppercase SNAKE_CASE with 'k' prefix (e.g., kDIGIT_NUM).
Function-scope constants that are not magic numbers or literals are named like non-constant variables (e.g., bool const pass = a && b).
If macros are necessary, name them in UPPER_SNAKE_CASE (e.g., FOO_VERSION) and prefer constants over #define.
Use LLVM clang-format; wrap lines at a maximum of 120 columns; use '// clang-format off/on' sparingly with justification.
Use smart pointers for heap allocations; prefer unique_ptr for sole ownership, shared_ptr for shared...

Files:

  • cpp/tensorrt_llm/common/opUtils.cpp
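
As a quick illustration of a few of the conventions quoted above (Allman braces, 'k'-prefixed constants, 'm'-prefixed members, namespace closing comments), a purely hypothetical snippet:

namespace foo
{

int constexpr kMAX_RETRIES = 3; // named constant instead of a #define

class FooBar
{
public:
    int nbRetriesLeft() const
    {
        return kMAX_RETRIES - mNbRetries;
    }

private:
    int mNbRetries{0}; // private member with 'm' prefix
};

} // namespace foo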
**/*.{cpp,cxx,cc,cu,h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

C++ filenames should be lowerCamelCase (first letter lowercase) and must be case-insensitive unique within a compilation target.

Files:

  • cpp/tensorrt_llm/common/opUtils.cpp
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • cpp/tensorrt_llm/common/opUtils.cpp
**/*.{h,hpp,hh,hxx,cpp,cxx,cc}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc}: Prefer anonymous namespaces over 'static' for internal linkage of functions.
All templates (class/function/member/static) must be instantiated at least once; non-POD classes should have private data members.

Files:

  • cpp/tensorrt_llm/common/opUtils.cpp
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • cpp/tensorrt_llm/common/opUtils.cpp
🧠 Learnings (3)
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.

Applied to files:

  • cpp/tensorrt_llm/common/opUtils.cpp
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/common/opUtils.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.

Applied to files:

  • cpp/tensorrt_llm/common/opUtils.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check


Comment @coderabbitai help to get the list of available commands and usage tips.

@yunruis yunruis force-pushed the user/yunruis/fix_bug_5606268 branch from 4b7e1cc to 32ab7a6 on October 31, 2025 06:18
@yunruis
Contributor Author

yunruis commented Oct 31, 2025

/bot run --disable-fail-fast

1 similar comment
@yunruis
Contributor Author

yunruis commented Oct 31, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23155 [ run ] triggered by Bot. Commit: 5205cfe

@tensorrt-cicd
Collaborator

PR_Github #23154 [ ] completed with state ABORTED. Commit: 5205cfe

@tensorrt-cicd
Collaborator

PR_Github #23141 [ run ] completed with state ABORTED. Commit: 4b7e1cc
LLM/release-1.1/L0_MergeRequest_PR #329 (Blue Ocean) completed with status: ABORTED

@yunruis yunruis force-pushed the user/yunruis/fix_bug_5606268 branch from 5205cfe to 871d3b8 on October 31, 2025 08:09
@yunruis
Contributor Author

yunruis commented Oct 31, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23162 [ run ] triggered by Bot. Commit: 871d3b8

@tensorrt-cicd
Collaborator

PR_Github #23155 [ run ] completed with state ABORTED. Commit: 5205cfe
LLM/release-1.1/L0_MergeRequest_PR #331 (Blue Ocean) completed with status: ABORTED

@yunruis
Contributor Author

yunruis commented Oct 31, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23189 [ run ] triggered by Bot. Commit: 871d3b8

@tensorrt-cicd
Collaborator

PR_Github #23162 [ run ] completed with state ABORTED. Commit: 871d3b8
LLM/release-1.1/L0_MergeRequest_PR #333 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #23189 [ run ] completed with state SUCCESS. Commit: 871d3b8
/LLM/release-1.1/L0_MergeRequest_PR pipeline #342 completed with status: 'FAILURE'

@yunruis
Contributor Author

yunruis commented Nov 1, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23253 [ run ] triggered by Bot. Commit: 871d3b8

@tensorrt-cicd
Collaborator

PR_Github #23253 [ run ] completed with state SUCCESS. Commit: 871d3b8
/LLM/release-1.1/L0_MergeRequest_PR pipeline #356 completed with status: 'FAILURE'

@yunruis
Contributor Author

yunruis commented Nov 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23291 [ run ] triggered by Bot. Commit: 871d3b8

@tensorrt-cicd
Collaborator

PR_Github #23291 [ run ] completed with state SUCCESS. Commit: 871d3b8
/LLM/release-1.1/L0_MergeRequest_PR pipeline #365 completed with status: 'FAILURE'

@yunruis yunruis force-pushed the user/yunruis/fix_bug_5606268 branch from 871d3b8 to c9443ef on November 2, 2025 15:31
@yunruis
Contributor Author

yunruis commented Nov 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #23294 [ run ] triggered by Bot. Commit: c9443ef

@tensorrt-cicd
Collaborator

PR_Github #23294 [ run ] completed with state SUCCESS. Commit: c9443ef
/LLM/release-1.1/L0_MergeRequest_PR pipeline #367 completed with status: 'SUCCESS'

@yunruis yunruis force-pushed the user/yunruis/fix_bug_5606268 branch from c9443ef to e9435e4 on November 3, 2025 02:48
Removed a skipped test case related to MPI single GPU.

Signed-off-by: yunruis <[email protected]>
@yunruis
Contributor Author

yunruis commented Nov 3, 2025

/bot run --stage-list "A10-PackageSanityCheck-PY310-UB2204"

@tensorrt-cicd
Collaborator

PR_Github #23321 [ run ] triggered by Bot. Commit: aa32a27

@tensorrt-cicd
Collaborator

PR_Github #23321 [ run ] completed with state SUCCESS. Commit: aa32a27
/LLM/release-1.1/L0_MergeRequest_PR pipeline #373 (Partly Tested) completed with status: 'SUCCESS'

Collaborator

@ZhanruiSunCh ZhanruiSunCh left a comment


LGTM

@yunruis
Contributor Author

yunruis commented Nov 3, 2025

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #23344 [ reuse-pipeline ] triggered by Bot. Commit: fbf7dd6

@tensorrt-cicd
Collaborator

PR_Github #23344 [ reuse-pipeline ] completed with state SUCCESS. Commit: fbf7dd6
Reusing PR_Github #23321 (Partly Tested) for commit fbf7dd6

@ZhanruiSunCh ZhanruiSunCh merged commit 07077fb into NVIDIA:release/1.1 Nov 3, 2025
5 checks passed