[TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding #8706
Conversation
104b975 to 1cec683

/bot run --disable-fail-fast

PR_Github #23393 [ run ] triggered by Bot. Commit:

55a6c7a to 00e94d8

/bot run

PR_Github #23514 [ run ] triggered by Bot. Commit:
📝 Walkthrough

This pull request extends the speculative decoding pipeline to support overlap scheduling for draft model execution. It adds GPU-based draft token processing, pre-allocated CUDA buffers for draft state management, and propagates draft-token updates through the model engine and executor loops. New parameters track draft tokens and accepted states throughout the forward pass to enable efficient overlap-aware execution.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Executor
    participant ModelEngine
    participant DraftModel
    participant TargetModel
    rect rgb(200, 220, 255)
        Note over Executor: Overlap Draft Iteration
        Executor->>DraftModel: forward_draft_model(draft_batch, num_accepted_tokens_device)
        DraftModel->>DraftModel: process draft tokens via GPU buffers
        DraftModel-->>Executor: draft_outputs, draft_tensors
    end
    rect rgb(220, 240, 220)
        Note over Executor: Process & Accumulate Draft Tokens
        Executor->>Executor: _accept_draft_tokens(target_outputs, target_inputs)
        Executor->>Executor: populate draft_tokens_accumulator per-request
        Executor-->>Executor: return new_target_inputs
    end
    rect rgb(240, 220, 220)
        Note over Executor: Submit Target Model with Draft Tokens
        Executor->>TargetModel: forward(scheduled_requests, new_target_inputs)
        TargetModel->>TargetModel: execute with updated input_ids from draft
        TargetModel-->>Executor: target_outputs
    end
    rect rgb(255, 240, 200)
        Note over Executor: Cleanup & Next Iteration
        Executor->>Executor: cleanup_previous_draft_resources()
        Executor->>Executor: store current batch as previous_draft_batch
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches: ❌ Failed checks (1 inconclusive) | ✅ Passed checks (1 passed)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
835-841: Fix NameError: `draft_layer_id` used before assignment.

The condition references `draft_layer_id` before it is defined, which will raise at runtime whenever `use_dynamic_tree` is True.

Apply this diff to define `draft_layer_id` before use and drop the later duplicate:

```diff
-        if (
-            spec_tree_manager.use_dynamic_tree
-            # FIXME: 'draft_layer_id' is undefined
-            and draft_layer_id == spec_tree_manager.max_draft_len - 1  # noqa: F821
-        ):
+        draft_layer_id = spec_tree_manager.cur_draft_layer_idx
+        if (
+            spec_tree_manager.use_dynamic_tree
+            and draft_layer_id == spec_tree_manager.max_draft_len - 1
+        ):
             # TODO: Re-sample the draft tokens for the last layer.
             raise NotImplementedError("Dynamic tree is not fully supported yet.")
@@
-        draft_layer_id = spec_tree_manager.cur_draft_layer_idx
```
760-768: Out-of-bounds indexing confirmed for dynamic tree speculative decoding.

The review comment correctly identifies a vulnerability. In `SpecTreeManager`, the dynamic tree path initializes `top_k_list` as a single tensor with shape `[dynamic_tree_max_topK]` where each element equals `dynamic_tree_max_topK`. When `sum()` is called, this yields `dynamic_tree_max_topK²`, which can easily exceed `max_total_draft_tokens + 1`. The loop then indexes `new_tokens_list[step]` with unbounded `step` values, causing out-of-bounds access in `add_token()` at line 301.

The TODO comment on line 763 ("For the last layer of the dynamic tree, we need to resampling all the draft tokens") explicitly confirms this code path is incomplete for dynamic trees.

Recommendation: Add bounds validation before the loop, either clamping `i` to `max_total_draft_tokens` or asserting `cur_layer_num_nodes <= max_total_draft_tokens + 1`, with supporting invariant documentation.
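A self-contained sketch of that guard, showing how the dynamic-tree initialization overflows the slot count (the names follow the review discussion and are assumptions about the sampler's locals, not the final API):

```python
import torch

# Mirror SpecTreeManager's dynamic-tree setup as described above: every
# element of top_k_list equals dynamic_tree_max_topK, so a layer's node
# count is dynamic_tree_max_topK ** 2.
dynamic_tree_max_topK = 4
max_total_draft_tokens = 8
top_k_list = torch.full((dynamic_tree_max_topK,), dynamic_tree_max_topK)

cur_layer_num_nodes = int(top_k_list.sum())  # 16, but only 8 + 1 slots exist

# The recommended guard before indexing new_tokens_list[step]:
assert cur_layer_num_nodes <= max_total_draft_tokens + 1, (
    f"dynamic tree produced {cur_layer_num_nodes} nodes but only "
    f"{max_total_draft_tokens + 1} draft-token slots exist")
```

As written, the assert fires for this configuration, which is exactly the failure mode the comment flags.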
🧹 Nitpick comments (4)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
860-873: Top-K per node extraction: ensure shapes and devices match.

The per-node topk uses `raw_logits[request_index * num_logits_per_request + i, :].indices`. Verify that logits and `request_index` are on the same device, and that `top_k_list` is a CUDA tensor if `raw_logits` is CUDA. Otherwise, implicit device syncs/copies may occur.

- Assert `raw_logits.device == new_draft_tokens_cuda.device` and move `top_k_list` to that device before use, as in the sketch below.
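A minimal sketch of that check (tensor names follow the review comment and are assumptions about the sampler's locals):

```python
import torch

def check_topk_devices(raw_logits: torch.Tensor,
                       new_draft_tokens_cuda: torch.Tensor,
                       top_k_list: torch.Tensor) -> torch.Tensor:
    """Validate device placement once, before the per-node top-k loop."""
    assert raw_logits.device == new_draft_tokens_cuda.device, (
        "logits and the draft-token output buffer must share a device")
    # Move the k values once up front; doing it inside the loop would
    # trigger an implicit host/device copy on every node.
    return top_k_list.to(raw_logits.device)
```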
tests/unittest/_torch/speculative/test_eagle3.py (2)
125-131: Conditional AR test gating matches overlap support.

Gating the AR test off when the attention backend is not TRTLLM or the chain drafter is disabled is reasonable given overlap constraints. Consider logging the skip reason to aid diagnosis in CI.
137-147: Acceptance-rate metric: handle zero drafted tokens defensively.

`num_drafted += max_draft_len` assumes every step drafts `max_draft_len` tokens. If the drafter shortens at the tail, `accept_rate` can be inflated. Optionally, derive the drafted count from the actual buffered draft tokens, if available, to tighten the assertion (see the sketch below).
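A hedged sketch of the tighter accounting (`py_draft_tokens` follows the repo's request objects; the `results` loop and the `num_accepted_tokens` counter are illustrative, not the actual test code):

```python
# Count what was actually drafted instead of assuming max_draft_len per
# step; `results` is a hypothetical list of finished requests.
num_drafted = 0
num_accepted = 0
for req in results:
    num_drafted += len(req.py_draft_tokens)   # actual, possibly < max_draft_len
    num_accepted += req.num_accepted_tokens   # hypothetical counter name
accept_rate = num_accepted / max(num_drafted, 1)  # avoid division by zero
```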
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
2408-2451: `_handle_speculative_decoding`: clarify return semantics and state effects.

- The method both computes acceptance and triggers drafter overlap, and returns `new_target_inputs` for the next forward. Consider documenting that `new_target_inputs` is what `ModelEngine.forward` consumes.
- After calling `generate_draft_tokens_with_overlap(..., num_accepted_tokens_device)`, you set `has_previous_draft_tokens` based on the presence of `new_target_inputs.next_draft_tokens`. Good. Ensure `next_draft_tokens` is indeed populated by the drafter before the following iteration; otherwise the forward won't overlap.

Add a brief docstring and an assert guarding the type of `new_target_inputs` when overlap is active, as in the sketch below.
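A hedged sketch of such a docstring and guard (the signature, helper calls, and the `enable_overlap_scheduler` flag are assumptions pieced together from this review, not the actual method):

```python
from tensorrt_llm._torch.speculative.mtp import SampleStateTensorsMTP

def _handle_speculative_decoding(self, scheduled_batch, target_inputs,
                                 target_outputs):
    """Validate draft tokens on device and launch the overlapped draft pass.

    Returns:
        The inputs the next ``ModelEngine.forward`` consumes. For overlap
        to take effect, the drafter must populate ``next_draft_tokens``
        before the following iteration.
    """
    new_target_inputs, num_accepted_tokens_device = self._accept_draft_tokens(
        target_outputs, target_inputs)
    self.drafter.generate_draft_tokens_with_overlap(
        scheduled_batch, new_target_inputs, num_accepted_tokens_device)
    if self.enable_overlap_scheduler:  # hypothetical flag name
        assert isinstance(new_target_inputs, SampleStateTensorsMTP), (
            "overlap path expects MTP-style target inputs")
    self.has_previous_draft_tokens = (
        new_target_inputs.next_draft_tokens is not None)
    return new_target_inputs
```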
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- tensorrt_llm/_torch/pyexecutor/model_engine.py (17 hunks)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (10 hunks)
- tensorrt_llm/_torch/pyexecutor/sampler.py (1 hunks)
- tensorrt_llm/_torch/speculative/model_drafter.py (26 hunks)
- tensorrt_llm/scripts/install_tensorrt.sh (1 hunks)
- tests/unittest/_torch/speculative/test_eagle3.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
- tensorrt_llm/_torch/pyexecutor/sampler.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tensorrt_llm/_torch/pyexecutor/py_executor.py
- tensorrt_llm/_torch/speculative/model_drafter.py
- tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
- tensorrt_llm/_torch/pyexecutor/sampler.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tensorrt_llm/_torch/pyexecutor/py_executor.py
- tensorrt_llm/_torch/speculative/model_drafter.py
- tensorrt_llm/_torch/pyexecutor/model_engine.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
- tensorrt_llm/_torch/pyexecutor/sampler.py
- tests/unittest/_torch/speculative/test_eagle3.py
- tensorrt_llm/_torch/pyexecutor/py_executor.py
- tensorrt_llm/_torch/speculative/model_drafter.py
- tensorrt_llm/_torch/pyexecutor/model_engine.py
🧠 Learnings (12)
📚 Learning: 2025-08-28T10:25:22.370Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:887-891
Timestamp: 2025-08-28T10:25:22.370Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the draft_probs and target_probs tensors have shapes [1, steps] not [steps, vocab_size] as might be expected, making the .squeeze(0) operations appropriate for removing the batch dimension of size 1.
Applied to files:
tensorrt_llm/_torch/pyexecutor/sampler.py
📚 Learning: 2025-08-28T10:22:02.288Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:1191-1197
Timestamp: 2025-08-28T10:22:02.288Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the object identity comparison `softmax_req_indices is not group_req_indices_cuda` on line ~1191 is intentional and used as an optimization to determine whether to reuse an existing indexer or create a new one, based on which code path was taken during tensor assignment.
Applied to files:
tensorrt_llm/_torch/pyexecutor/sampler.py
📚 Learning: 2025-08-18T09:08:07.687Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 6984
File: cpp/tensorrt_llm/CMakeLists.txt:297-299
Timestamp: 2025-08-18T09:08:07.687Z
Learning: In the TensorRT-LLM project, artifacts are manually copied rather than installed via `cmake --install`, so INSTALL_RPATH properties are not needed - only BUILD_RPATH affects the final artifacts.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/scripts/install_tensorrt.sh
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
- tensorrt_llm/_torch/pyexecutor/py_executor.py
- tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🧬 Code graph analysis (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
- tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  - get_draft_token_length (777-788)

tests/unittest/_torch/speculative/test_eagle3.py (1)
- tests/integration/defs/accuracy/test_disaggregated_serving.py (2)
  - generate_async (277-283)
  - outputs (38-39)

tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
- tensorrt_llm/_torch/attention_backend/trtllm.py (1)
  - TrtllmAttention (1150-1590)
- tensorrt_llm/_torch/speculative/model_drafter.py (2)
  - ModelDrafter (52-948)
  - generate_draft_tokens_with_overlap (774-880)
- tensorrt_llm/_torch/speculative/mtp.py (1)
  - SampleStateTensorsMTP (26-28)

tensorrt_llm/_torch/speculative/model_drafter.py (7)
- tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  - LlmRequest (422-631)
- tensorrt_llm/_torch/pyexecutor/scheduler.py (3)
  - ScheduledRequests (20-41)
  - all_requests (40-41)
  - batch_size (37-38)
- tensorrt_llm/_torch/pyexecutor/sampler.py (7)
  - SampleStateTensors (83-88)
  - SampleState (92-98)
  - update_requests (121-126)
  - update_requests (161-172)
  - update_requests (212-231)
  - update_requests (990-1032)
  - update_requests (1862-1880)
- tensorrt_llm/_torch/pyexecutor/model_engine.py (2)
  - forward (77-84)
  - forward (2502-2604)
- tensorrt_llm/_torch/speculative/mtp.py (4)
  - forward (341-538)
  - forward (1172-1360)
  - SampleStateTensorsMTP (26-28)
  - update_requests (251-280)
- tensorrt_llm/_torch/speculative/drafting_loops.py (1)
  - forward (119-154)
- tensorrt_llm/_torch/speculative/drafter.py (1)
  - pad_draft_tokens_for_cuda_graph (57-69)

tensorrt_llm/_torch/pyexecutor/model_engine.py (6)
- tensorrt_llm/_torch/pyexecutor/llm_request.py (4)
  - LlmRequest (422-631)
  - get_draft_token_length (777-788)
  - append (101-127)
  - append (192-209)
- tensorrt_llm/_torch/speculative/eagle3.py (2)
  - forward (325-447)
  - Eagle3ResourceManager (23-107)
- tensorrt_llm/_torch/pyexecutor/scheduler.py (2)
  - ScheduledRequests (20-41)
  - batch_size (37-38)
- tensorrt_llm/_torch/pyexecutor/resource_manager.py (2)
  - ResourceManager (1287-1330)
  - get_resource_manager (1299-1300)
- tensorrt_llm/_torch/pyexecutor/sampler.py (1)
  - SampleStateTensors (83-88)
- tensorrt_llm/_torch/speculative/utils.py (1)
  - SpecDecodingTensor (253-264)
🪛 Ruff (0.14.3)
tensorrt_llm/_torch/speculative/model_drafter.py
897-897: Avoid specifying long messages outside the exception class
(TRY003)
900-900: Avoid specifying long messages outside the exception class
(TRY003)
🪛 Shellcheck (0.11.0)
tensorrt_llm/scripts/install_tensorrt.sh
[warning] 36-36: Quote the right-hand side of != in [[ ]] to prevent glob matching.
(SC2053)
[warning] 86-86: ARCH2 appears unused. Verify use (or export if used externally).
(SC2034)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (5)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
280-283: Grouping condition aligns with draft-token presence.

Switching to `get_draft_token_length(req) > 0` correctly reflects the overlap path, where `py_draft_logits` may be unset. Looks good.
784-825: Code logic is correct; the review concern is unfounded.

The test file tests/unittest/_torch/speculative/test_draft_token_tree_verification.py validates this logic with concrete examples. Test cases show:

- 2 draft tokens accepted (test case 2): `cumprod().sum()` = 2 → `+1` = 3 → loop 3 times → return `3-1` = 2 ✓
- 3 draft tokens accepted (test case 3): `cumprod().sum()` = 3 → `+1` = 4 → loop 4 times → return `4-1` = 3 ✓

The `+1` includes the root node in tree traversal (for `add_token` calls), while the `-1` excludes it from the returned count of accepted draft tokens. This pairing is intentional and correct; the net effect returns the count of draft tokens accepted (excluding the implicit root).

The tests pass as expected, confirming the off-by-one concern is incorrect.
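A self-contained illustration of the `+1`/`-1` pairing (the acceptance mask is made up for the example):

```python
import torch

# 1 = draft token matched the target model, 0 = first mismatch onward.
acceptance = torch.tensor([1, 1, 0, 1])
num_accepted = int(acceptance.cumprod(dim=0).sum())  # 2: cumprod stops at the 0
loop_count = num_accepted + 1   # +1 visits the root node via add_token()
returned = loop_count - 1       # -1 excludes the root from the final count
assert returned == 2
```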
tests/unittest/_torch/speculative/test_eagle3.py (1)
525-526: CDL sampling uses temperature=1.0, top_p=0.9.

The intentional randomness increase looks fine and should avoid degenerate greedy paths.
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
276-294: Overlap scheduler gating: good guardrails with clear fallback.

The `overlap_not_supported` check and fallback to the non-overlap loop are sensible. The warning message explains the constraint.
- Ensure `draft_model_engine.attn_backend` is a class (not an instance); `issubclass(...)` will raise otherwise. If it can be an instance, switch to `isinstance(..., TrtllmAttention)`, as in the sketch below.
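A hedged sketch of the type-tolerant check (assumes `TrtllmAttention` from `tensorrt_llm._torch.attention_backend.trtllm` per the code graph above; `draft_model_engine` is surrounding context):

```python
from tensorrt_llm._torch.attention_backend.trtllm import TrtllmAttention

backend = draft_model_engine.attn_backend
# issubclass() raises TypeError on instances, so branch on whether the
# attribute is a class object or an instantiated backend.
if isinstance(backend, type):
    uses_trtllm_attention = issubclass(backend, TrtllmAttention)
else:
    uses_trtllm_attention = isinstance(backend, TrtllmAttention)
```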
1066-1071: Prepare draft requests early: aligns scheduler/KV manager state.

Always initializing `py_draft_tokens`/pages before scheduling avoids mismatches. Good change.
PR_Github #23514 [ run ] completed with state

5920ecb to 589164a

/bot run

PR_Github #23566 [ run ] triggered by Bot. Commit:

PR_Github #23566 [ run ] completed with state

589164a to f8b4ffb

/bot run

PR_Github #23684 [ run ] triggered by Bot. Commit:

PR_Github #23684 [ run ] completed with state

/bot run

PR_Github #23778 [ run ] triggered by Bot. Commit:

/bot run

4f1daaf to 099fa8a

/bot run

PR_Github #24026 [ run ] triggered by Bot. Commit:

PR_Github #24027 [ run ] triggered by Bot. Commit:

PR_Github #24026 [ run ] completed with state

PR_Github #24027 [ run ] completed with state

/bot run

PR_Github #24100 [ run ] triggered by Bot. Commit:

PR_Github #24100 [ run ] completed with state

/bot run

PR_Github #24157 [ run ] triggered by Bot. Commit:

PR_Github #24157 [ run ] completed with state

@mikeiovine Could you share the cmd for reproducing this issue with me? I can take a look at the failure.

/bot run
…ding Signed-off-by: ziyixiong-nv <[email protected]>
Signed-off-by: ziyixiong-nv <[email protected]>
099fa8a to 30fac03

/bot run

PR_Github #24194 [ run ] triggered by Bot. Commit:

PR_Github #24194 [ run ] completed with state

/bot run

PR_Github #24239 [ run ] triggered by Bot. Commit:

PR_Github #24239 [ run ] completed with state

/bot run

PR_Github #24309 [ run ] triggered by Bot. Commit:

PR_Github #24309 [ run ] completed with state
Summary by CodeRabbit
Release Notes
New Features
Improvements
Description
This PR enhances the overlap scheduler for two-model speculative decoding.
In the previous implementation, draft tokens had to be validated on the host, so the target model's update_requests had to run before the draft model could be forwarded.
In this PR, the draft tokens are validated on the GPU, and the draft model is then forwarded directly based on the validation results.
The new workflow is as below.

Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.

run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`

Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.