
[CI] Fix nightly multi-node test error for wait for pod ready #6675

Merged
wangxiyuan merged 1 commit into vllm-project:main from MrZ20:fix_nightly
Feb 11, 2026
Conversation

@MrZ20 (Contributor)

@MrZ20 MrZ20 commented Feb 11, 2026

What this PR does / why we need it?

Fixes the issue where nightly multi-node tests hang during the "wait for pod ready" stage: the sourced Ascend environment script is not compatible with the strict shell mode (`set -eu`) enabled in the test script, so sourcing it terminated the script prematurely.

issue: https://github.com/vllm-project/vllm-ascend/actions/runs/21874130621/job/63137883914

bug: (screenshot of the failing CI log attached in the original PR; filename "Screenshot 2026-02-11 11 16 44 copy")
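The failure mode can be reproduced in miniature. The sketch below is hypothetical (`/tmp/fake_set_env.sh` and `EXTRA_PATH` are stand-ins for the real Ascend environment script and whatever unset variable it touches): under `set -u`, expanding an unset variable inside a sourced script aborts the caller, while wrapping the `source` in `set +eu` / `set -eu` lets setup continue.

```shell
#!/bin/sh
set -eu

# Stand-in for /usr/local/Ascend/nnal/atb/set_env.sh (hypothetical content):
# it expands a variable the CI pod does not define.
cat > /tmp/fake_set_env.sh <<'EOF'
export TOOL_PATH="${EXTRA_PATH}:/opt/tool"
EOF

# The pattern applied in this PR: relax strict mode only around the source,
# then restore it so the rest of the script stays fail-fast.
set +eu
. /tmp/fake_set_env.sh
set -eu

echo "setup continued"
```

Without the `set +eu` / `set -eu` pair, the `${EXTRA_PATH}` expansion would terminate the script under `set -u`, which surfaces in CI as the job stalling at "wait for pod ready".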

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: MrZ20 <2609716663@qq.com>
@MrZ20 MrZ20 requested a review from wangxiyuan as a code owner February 11, 2026 03:19
@gemini-code-assist (Contributor)

Summary of Changes

Hello @MrZ20, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical CI failure in nightly multi-node tests by modifying a shell script. The change prevents the test script from prematurely exiting during environment setup, which was causing tests to hang at the 'wait for pod ready' stage. This ensures the stability and reliability of the multi-node test suite.

Highlights

  • CI Fix: Temporarily disabled strict shell mode (set -eu) around the source /usr/local/Ascend/nnal/atb/set_env.sh command in the multi-node test script.
  • Test Stability: Resolved an issue where nightly multi-node tests were hanging during the 'wait for pod ready' stage due to the strict shell mode causing premature script termination.


Changelog
  • tests/e2e/nightly/multi_node/scripts/run.sh
    • Modified the script to temporarily disable set -eu before sourcing /usr/local/Ascend/nnal/atb/set_env.sh and re-enabling it afterwards.

@github-actions (Contributor)

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request addresses a hang in the nightly multi-node tests by temporarily disabling strict shell mode (set -eu) when sourcing a vendor environment script. The change is correct and effectively resolves the issue. I have provided one suggestion to improve the robustness of the script by ensuring it fails explicitly if the environment file is missing.

Additionally, as per the repository's style guide, I've suggested an improved pull request title and a more detailed summary below.

Suggested PR Title:

[CI][BugFix] Fix nightly multi-node test hang

Suggested PR Summary:

### What this PR does / why we need it?

This PR fixes an issue where the nightly multi-node test hangs. The hang is caused by the `run.sh` script exiting prematurely when sourcing `/usr/local/Ascend/nnal/atb/set_env.sh`. This environment script is not compatible with the strict shell mode (`set -eu`) enabled at the beginning of `run.sh`.

The fix involves temporarily disabling strict mode (`set +eu`) before sourcing the problematic script and re-enabling it immediately after. This allows the setup to complete without errors, preventing the test from hanging.

### Does this PR introduce _any_ user-facing change?

No. This is a CI fix and does not affect end-users.

### How was this patch tested?

The fix is validated by observing that the nightly multi-node test pipeline now completes successfully without hanging.

Comment on lines +18 to +20

```shell
set +eu
source /usr/local/Ascend/nnal/atb/set_env.sh
set -eu
```


Severity: high

While disabling strict mode with set +eu prevents the script from exiting due to issues within set_env.sh, it also suppresses errors if the source command itself fails (e.g., if the file doesn't exist). This could lead to the script continuing with an incomplete environment, causing non-obvious failures later.

To make this more robust, I suggest checking for the existence of the file before attempting to source it. This ensures that the script fails fast if the environment script is missing.

Suggested change

```diff
-set +eu
-source /usr/local/Ascend/nnal/atb/set_env.sh
-set -eu
+if [ -f "/usr/local/Ascend/nnal/atb/set_env.sh" ]; then
+    set +eu
+    source /usr/local/Ascend/nnal/atb/set_env.sh
+    set -eu
+else
+    echo "ERROR: /usr/local/Ascend/nnal/atb/set_env.sh not found!" >&2
+    exit 1
+fi
```

@wangxiyuan wangxiyuan merged commit 6bc44bf into vllm-project:main Feb 11, 2026
22 checks passed
mikequan0425 pushed a commit to taoyao1221/vllm-ascend that referenced this pull request Feb 11, 2026
…oject#6675)

### What this PR does / why we need it?
Fixes the issue where nightly multi-node tests hang during the "wait for
pod ready" stage due to strict shell mode.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.15.0
- vLLM main:
vllm-project/vllm@1339784

Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: mikequan0425 <mikequan0425@foxmail.com>
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 12, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Docs] Fix GLM-5 deploy command (vllm-project#6711)
  [npugraph_ex]enable npugraph_ex by default (vllm-project#6664)
  [doc]add GLM5.md (vllm-project#6709)
  [Model] GLM5 adaptation (vllm-project#6642)
  [Bugfix] Update target probs to target logits in rejection sample (vllm-project#6685)
  [Main][Ops] Make triton rope support index_selecting from cos_sin_cache (vllm-project#5450)
  [CI]fix nightly multi node test error for wait for pod ready (vllm-project#6675)
  [main  to main] upgrade main 0210 (vllm-project#6673)
  [main][Quant] Remove unused rotation functions and parameters from W4A4 LAOS quantization (vllm-project#6648)
  [Test][BugFix] Fix torch.rand usage in triton penalty test (vllm-project#6680)
  Add Worker Interface:check_health (vllm-project#6681)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
@MrZ20 MrZ20 deleted the fix_nightly branch February 24, 2026 01:37
banxiaduhuo pushed a commit to banxiaduhuo/vllm-ascend that referenced this pull request Feb 26, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
yangzhe-2026 pushed a commit to yangzhe-2026/vllm-ascend that referenced this pull request May 6, 2026