
Conversation

@dcampora (Collaborator) commented Sep 24, 2025

Summary by CodeRabbit

  • Refactor
    • Streamlined token handling during sampling to use a unified, higher-level recording path, improving consistency and maintainability.
    • No changes to user-facing behavior, outputs, or configuration.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Supported backends are [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
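
For example, a typical invocation that restricts the run to a single stage and disables fail-fast might look like this (the stage name is the illustrative one used above):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast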

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@dcampora requested a review from a team as a code owner on September 24, 2025 at 10:29
@coderabbitai bot (Contributor) commented Sep 24, 2025

📝 Walkthrough

Replaces direct writes to the new_tokens buffer during rejection-sampling handling with request.add_new_token calls. Adjusts the sample_last branch to use add_new_token when applicable, otherwise falls back to add_token with step indexing. Stop criteria checks and beam logic remain unchanged.
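
For illustration, here is a minimal sketch of the new recording path in Python. The helper name `_record_tokens` and the parameters `accepted_draft_tokens` and `sampled_token` are hypothetical; only `request.add_new_token`, `add_token`, `new_tokens`, `self.BEAM`, `sample_last`, and `num_accepted` come from the walkthrough, and the actual code in sampler.py may differ in detail.

```python
def _record_tokens(self, request, new_tokens, accepted_draft_tokens,
                   sampled_token, sample_last: bool, num_accepted: int):
    """Sketch of the unified token-recording path (hypothetical helper)."""
    # Accepted draft tokens are recorded through the request object instead
    # of being written directly into the shared new_tokens buffer.
    for token in accepted_draft_tokens:
        request.add_new_token(token, self.BEAM)
        # Stop-criteria checks after each token are unchanged and omitted here.

    if sample_last:
        # The freshly sampled token also goes through add_new_token.
        request.add_new_token(sampled_token, self.BEAM)
    else:
        # All drafts were accepted: fall back to the existing buffer-based
        # helper, indexing into new_tokens by the number of accepted tokens.
        add_token(request, new_tokens, beam=self.BEAM, step=num_accepted)
```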

Changes

| Cohort / File(s) | Summary of Changes |
| --- | --- |
| PyExecutor Sampler: token recording refactor<br>`tensorrt_llm/_torch/pyexecutor/sampler.py` | Replaced in-place writes to `new_tokens` with `request.add_new_token(...)` during draft token acceptance and in the `sample_last` branch; introduced a conditional path to call `add_token(request, new_tokens, beam=self.BEAM, step=num_accepted)` when not sampling the last token; preserved stop checks and beam handling. |

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Sampler
  participant Request
  participant StopCriteria as Stop Criteria

  rect rgb(245,248,255)
    Sampler->>Request: add_new_token(new_token, beam)
    Request-->>Sampler: ack
    Sampler->>StopCriteria: check after each token
    StopCriteria-->>Sampler: continue / stop
  end

  alt sample_last == true
    Sampler->>Request: add_new_token(new_token, beam)
    Request-->>Sampler: ack
  else sample_last == false
    Note over Sampler: Use legacy buffer path
    Sampler->>Sampler: add_token(request, new_tokens, beam, step=num_accepted)
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve coverage.
  • Description Check ⚠️ Warning: The PR body contains only the repository template text, with the Description and Test Coverage sections left empty and no explanation of the code changes, rationale, or validation steps. This prevents reviewers from verifying that the change addresses the intended issue and that adequate tests exist, so the PR description does not meet the repository's template requirements. Please update the PR body with a concise Description of what was changed and why, populate Test Coverage with the specific tests or test plans that validate the change, and note any API or behavioral impacts. Also ensure the PR title follows the required bracketed format and complete the PR checklist (including CODEOWNERS if ownership changed), or state explicitly that no additional tests or ownership updates are needed, and link any related issue or ticket.
✅ Passed checks (1 passed)
  • Title Check ✅ Passed: The title "[None][fix] Fix access to new tokens in sampler." is concise, follows the repository's required prefix format, and accurately summarizes the primary change (a fix to how the sampler accesses new tokens), so it communicates the main intent to reviewers.

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)

895-901: Type hint drift: new_tokens is a nested list here, not a torch.Tensor

process_draft_tokens_rejection_sampling (and the greedy variant) consume new_tokens coming from state.host.new_tokens.tolist(), i.e., list[list[list[int]]], but the signature says torch.Tensor. Adjust the annotation to avoid confusion and catch misuse.

Example:

-def _process_draft_tokens_rejection_sampling(
-        self, request: LlmRequest, new_tokens: torch.Tensor) -> int:
+def _process_draft_tokens_rejection_sampling(
+        self, request: LlmRequest, new_tokens: list[list[list[int]]]) -> int:

And similarly for _process_draft_tokens_greedy.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b8bfa63 and 5760646.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/sampler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/sampler.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
PR: NVIDIA/TensorRT-LLM#6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/sampler.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)

914-919: Good fix: use request.add_new_token for accepted drafts

Switching from in-place writes into new_tokens to request.add_new_token(new_token, beam) is the right abstraction and avoids issues with inference-mode tensors and state desync on the request object.
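
For context, the two recording styles differ roughly as follows (a sketch only; the step/seq_slot indexing is illustrative and may not match the exact buffer layout in sampler.py):

```python
# Old style (sketch): write the accepted token straight into the shared
# new_tokens buffer; the request's own token bookkeeping is not updated here.
new_tokens[step][seq_slot][beam] = accepted_token

# New style (sketch): record the token through the request itself, so the
# request's token history stays in sync with what the sampler emitted.
request.add_new_token(accepted_token, beam)
```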


922-930: LGTM on sample_last vs. fallback path

Using request.add_new_token for the sampled-last token and falling back to add_token(..., step=num_accepted) when all drafts are accepted is correct and keeps request state consistent.

@kris1025 requested a review from mikeiovine on September 24, 2025 at 11:18
@dcampora enabled auto-merge (squash) on September 24, 2025 at 11:25
@dcampora (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #19802 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19802 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #14898 completed with status: 'FAILURE'

@dcampora (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #19812 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19812 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14907 completed with status: 'FAILURE'

Signed-off-by: Daniel Campora <[email protected]>
Signed-off-by: Daniel Campora <[email protected]>
Signed-off-by: Daniel Campora <[email protected]>
Signed-off-by: Daniel Campora <[email protected]>
@dcampora force-pushed the user/dcampora/fix_sampler_draft_bug branch from 7ff6dc8 to 7cafec5 on September 30, 2025 at 07:24
@dcampora (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20343 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20343 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15347 completed with status: 'FAILURE'

@kris1025 (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20361 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20361 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15360 completed with status: 'FAILURE'

@dcampora (Collaborator, Author) commented Oct 2, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20525 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20525 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15483 completed with status: 'FAILURE'

@dcampora (Collaborator, Author) commented Oct 2, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20545 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20545 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15502 completed with status: 'SUCCESS'

@dcampora merged commit ab433b7 into NVIDIA:main on Oct 2, 2025
5 checks passed
evezhier pushed a commit to evezhier/TensorRT-LLM that referenced this pull request Oct 3, 2025
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request Oct 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025