Conversation

@raayandhar (Contributor) commented Jul 30, 2025

Summary by CodeRabbit

  • Tests
    • Updated integration tests to use llama-3.1-8b instead of TinyLlama-1.1B-Chat-v1.0.
    • Enabled previously skipped tests, allowing them to run as part of the test suite.

Description

In the PR adding disaggregated PP support on the PyTorch backend, we hit an issue where the ctx pp4 + gen pp4 integration test would fail.

This failure was because TinyLlama has 22 hidden layers, which is not divisible by 4. Switching to Llama 3 8B, which has 32 hidden layers, should fix this. The test is registered under H200s, so the larger model's memory footprint should not be an issue.
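
For illustration, a minimal sketch of the constraint (a hypothetical helper, not code from this PR): with a uniform pipeline-parallel split, every stage must receive the same number of layers, so the layer count has to divide evenly by the PP size.

    def layers_per_stage(num_hidden_layers: int, pp_size: int) -> int:
        # Uniform pipeline-parallel split: every stage gets the same slice.
        if num_hidden_layers % pp_size != 0:
            raise ValueError(
                f"{num_hidden_layers} layers cannot be split evenly "
                f"across {pp_size} pipeline stages")
        return num_hidden_layers // pp_size

    layers_per_stage(32, 4)  # Llama 3 8B: 8 layers per stage
    layers_per_stage(22, 4)  # TinyLlama: raises ValueError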

Test Coverage

n/a

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
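
For example, an illustrative invocation composed from the flags documented above (not a command taken from this thread):

    /bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"

This would run the pre-merge pipeline without fail-fast, restricted to H100_PCIe stages using the PyTorch backend.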

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@raayandhar (Contributor Author)

/bot run

@coderabbitai bot (Contributor) commented Jul 30, 2025

📝 Walkthrough

The disaggregated tests in test_disaggregated.py were updated to replace TinyLlama-1.1B-Chat-v1.0 with llama-3.1-8b. The corresponding model symlink paths were updated, and the unconditional skip statements were removed so the tests now execute. The test-list entry for 8 GPUs with H200 on Ubuntu was also updated to use llama-3.1-8b for the same test.

Changes

Disaggregated Test Update (tests/integration/defs/disaggregated/test_disaggregated.py):
Changed the parameterized model from TinyLlama-1.1B-Chat-v1.0 to llama-3.1-8b; updated the model symlink path; removed unconditional skips, enabling the tests to run.

Test List Adjustment (tests/integration/test_lists/test-db/l0_dgx_h200.yml):
Updated the test entry for test_disaggregated_ctxpp4_genpp4 to use the new model variant, llama-3.1-8b.
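
A hypothetical sketch of what such a parameterization change can look like in pytest (names are illustrative; the real test lives in tests/integration/defs/disaggregated/test_disaggregated.py):

    import pytest

    # Illustrative sketch only. Previously parameterized with
    # "TinyLlama-1.1B-Chat-v1.0" (22 layers), which cannot be split
    # evenly across 4 pipeline stages.
    @pytest.mark.parametrize("model_dir,num_hidden_layers",
                             [("llama-3.1-8b", 32)])
    def test_disaggregated_ctxpp4_genpp4(model_dir, num_hidden_layers):
        # Both the ctx (pp4) and gen (pp4) servers need an even layer split.
        assert num_hidden_layers % 4 == 0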

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Suggested reviewers

  • pcastonguay
  • Tabrizian
  • Shixiaowei02
  • litaotju



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ebdf5b and 52728c3.

📒 Files selected for processing (1)
  • tests/integration/test_lists/test-db/l0_dgx_h200.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/integration/test_lists/test-db/l0_dgx_h200.yml

@coderabbitai bot requested review from Tabrizian and pcastonguay, July 30, 2025 17:09
@tensorrt-cicd (Collaborator)

PR_Github #13562 [ run ] triggered by Bot

@Tabrizian (Member) left a comment

This failure was because TinyLlama has 22 hidden layers, which is not divisible by 4.

Is this a generic limitation of PP that also applies to aggregated (AGG) serving, or does it exist only for disaggregated serving?

@pcastonguay (Collaborator) left a comment

What was the behavior with TinyLlama? Would it just hang? If so, could we catch this issue and return a clear error message?

@raayandhar (Contributor Author)

What was the behavior with TinyLlama? Would it just hang? If so, could we catch this issue and return a clear error message?

Just running the test, it would time out. In the output logs we do get an error message complaining that the layers cannot be divided evenly into the stages, but I think the timeout fires before the error propagates. I can look into returning a clearer error message earlier; in the non-pytest case the error returns fairly quickly.

@raayandhar (Contributor Author)

This failure was because TinyLlama has 22 hidden layers, which is not divisible by 4.

Is this a generic limitation of PP that also applies to aggregated (AGG) serving, or does it exist only for disaggregated serving?

My understanding is that, for N pipeline-parallel ranks/stages, the number of layers must be divisible by N in our current implementation for pipeline parallelism to work. So unless the aggregated implementation is more flexible and lets us decide more freely how to split the model layers into sequential stages, I think this is a generic limitation of PP in our current implementation.

cc @pcastonguay to confirm?
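
Concretely, under a uniform split, rank i owns layers [i·L/N, (i+1)·L/N). A small sketch under that assumption (illustrative, not the repository's implementation):

    def stage_layers(num_layers: int, pp_size: int, rank: int) -> range:
        # Uniform assignment is only defined when the division is exact.
        assert num_layers % pp_size == 0, "layers must divide evenly across stages"
        per_stage = num_layers // pp_size
        return range(rank * per_stage, (rank + 1) * per_stage)

    # Llama 3.1 8B (32 layers) on pp4: ranks own layers 0-7, 8-15, 16-23, 24-31.
    print([list(stage_layers(32, 4, r)) for r in range(4)])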

@coderabbitai bot requested review from Tabrizian and pcastonguay, July 30, 2025 21:00
@raayandhar (Contributor Author) commented Jul 30, 2025

What was the behavior with TinyLlama? Would it just hang? If so, could we catch this issue and return a clear error message?

It seems that regardless of the root-cause error (unless it is pytest-related; this matches my experience across a variety of failures), we hang and time out with Exception: Server did not become ready in time., since the harness keeps polling the health endpoints until the deadline. The actual error is usually buried deep in the worker logs, and I don't see an easy or clean way for us to catch these issues and fail immediately.
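
One possible mitigation, sketched below under the assumption that the harness launches workers via subprocess (illustrative names, not the repository's actual code): poll the health endpoint while also watching the worker process, so a crashed worker fails the test immediately instead of waiting out the deadline.

    import subprocess
    import time
    import urllib.request

    def wait_until_ready(proc: subprocess.Popen, health_url: str,
                         timeout_s: float = 300.0) -> None:
        """Poll a health endpoint, but fail fast if the worker process dies."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if proc.poll() is not None:
                # The worker exited: surface its exit code immediately
                # instead of waiting out the readiness deadline.
                raise RuntimeError(
                    f"worker exited early with code {proc.returncode}; see worker logs")
            try:
                with urllib.request.urlopen(health_url, timeout=5) as resp:
                    if resp.status == 200:
                        return
            except OSError:
                pass  # server not listening yet; keep polling
            time.sleep(2)
        raise TimeoutError("Server did not become ready in time.")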

@tensorrt-cicd (Collaborator)

PR_Github #13562 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10165 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #13708 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #13708 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10298 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run --disable-fail-fast

Signed-off-by: Raayan Dhar <[email protected]>
@raayandhar (Contributor Author)

/bot kill

@raayandhar (Contributor Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #13720 [ run ] triggered by Bot

@coderabbitai bot requested a review from litaotju, July 31, 2025 21:11
@tensorrt-cicd (Collaborator)

PR_Github #14171 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14171 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #10697 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14174 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14174 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10701 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #14182 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14182 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10707 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #14195 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14195 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10720 completed with status: 'FAILURE'

@raayandhar (Contributor Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #14313 [ run ] triggered by Bot

@raayandhar (Contributor Author)

/bot run --only-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #14326 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14313 [ run ] completed with state ABORTED
/LLM/main/L0_MergeRequest_PR pipeline #10813 completed with status: 'FAILURE'

@tensorrt-cicd (Collaborator)

PR_Github #14326 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10824 (Partly Tested) completed with status: 'SUCCESS'

@raayandhar (Contributor Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #14357 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14357 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10850 (Partly Tested) completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@pcastonguay (Collaborator)

/bot skip --comment "Single and multi gpu tests passed separately"

@tensorrt-cicd (Collaborator)

PR_Github #14472 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #14472 [ skip ] completed with state SUCCESS
Skipping testing for commit 6a916d9

@pcastonguay merged commit 4055b76 into NVIDIA:main, Aug 7, 2025
4 checks passed
Shunkangz pushed a commit to hcyezhang/TensorRT-LLM that referenced this pull request Aug 8, 2025