
[bugfix]fix file not found error in nightly of single-node#6976

Merged
wangxiyuan merged 3 commits intovllm-project:mainfrom
MrZ20:fix_nightly_build
Mar 4, 2026
Conversation

@MrZ20
Contributor

@MrZ20 MrZ20 commented Mar 4, 2026

What this PR does / why we need it?

  1. The main image build takes approximately two hours, so its schedule needs to be moved forward to 9pm (UTC+8) to ensure that the nightly image build can use the latest main image.
```yaml
schedule:
   # UTC+8: 8am, 12pm, 16pm, 22pm
   - cron: '0 0,4,8,14 * * *'
```

--->

```yaml
schedule:
   # UTC+8: 8am, 12pm, 16pm, 21pm
   - cron: '0 0,4,8,13 * * *'
```

Link: https://github.com/vllm-project/vllm-ascend/actions/runs/22632712302/job/65641055135#step:8:26

  2. The nightly test is encountering the following error:

```
ImportError: ascend_transport.so: cannot open shared object file: No such file or directory.
```

The path needs to be added:

```bash
echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```

Link: https://github.com/vllm-project/vllm-ascend/actions/runs/22632712302/job/65641054911#step:7:529
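As a quick sanity check (not part of the PR), the UTC hours in the new cron expression can be converted to UTC+8 wall-clock hours with a one-liner, confirming the schedule lands at 8am, 12pm, 4pm, and 9pm local time:

```python
# Convert the UTC hours from '0 0,4,8,13 * * *' to UTC+8 wall-clock hours.
UTC_CRON_HOURS = [0, 4, 8, 13]

local_hours = [(h + 8) % 24 for h in UTC_CRON_HOURS]
print(local_hours)  # -> [8, 12, 16, 21], i.e. 8am, 12pm, 4pm, 9pm in UTC+8
```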

Does this PR introduce any user-facing change?

How was this patch tested?

MrZ20 added 2 commits March 4, 2026 10:12
Signed-off-by: MrZ20 <2609716663@qq.com>
Signed-off-by: MrZ20 <2609716663@qq.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a runtime 'file not found' error by consistently adding '/usr/local/lib' to the LD_LIBRARY_PATH environment variable across various Docker build configurations and within the end-to-end testing setup. This ensures that necessary shared libraries are discoverable, particularly in single-node nightly environments, improving the stability and reliability of the builds.

Highlights

  • Bugfix for 'file not found' errors: This pull request addresses a 'file not found' error that was occurring in single-node nightly builds, likely due to missing library paths.
  • LD_LIBRARY_PATH updates in Dockerfiles: All relevant Dockerfiles have been updated to explicitly include '/usr/local/lib' in the LD_LIBRARY_PATH environment variable, ensuring that dynamically linked libraries are correctly located at runtime within the Docker containers.
  • Test configuration adjustment: The 'tests/e2e/conftest.py' file was modified to prepend '/usr/local/lib' to the 'mooncake_ld_path', aligning the test environment with the Docker environment changes.
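The conftest.py change described above is essentially a path prepend on a colon-separated variable. A minimal sketch of that pattern (the helper name `prepend_ld_path` is illustrative, not from the PR):

```python
def prepend_ld_path(existing: str, new_dir: str = "/usr/local/lib") -> str:
    """Prepend new_dir to a colon-separated path string, avoiding duplicates."""
    parts = [p for p in existing.split(":") if p and p != new_dir]
    return ":".join([new_dir] + parts)

print(prepend_ld_path("/opt/mooncake/lib"))  # -> /usr/local/lib:/opt/mooncake/lib
```

Prepending (rather than appending) makes /usr/local/lib win over any stale copy of the library elsewhere on the path.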


Changelog
  • Dockerfile
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • Dockerfile.310p
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • Dockerfile.310p.openEuler
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • Dockerfile.a3
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • Dockerfile.a3.openEuler
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • Dockerfile.openEuler
    • Added a command to export '/usr/local/lib' to the LD_LIBRARY_PATH environment variable.
  • tests/e2e/conftest.py
    • Modified the 'mooncake_ld_path' variable to prepend '/usr/local/lib' to the existing path.
Ignored Files
  • Ignored by pattern: .github/workflows/** (3)
    • .github/workflows/_e2e_nightly_single_node.yaml
    • .github/workflows/_e2e_nightly_single_node_models.yaml
    • .github/workflows/schedule_image_build_and_push.yaml
Activity
  • The pull request description is empty, so no specific activity or context was provided by the author beyond the title.

@github-actions
Contributor

github-actions bot commented Mar 4, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a "file not found" error by adding /usr/local/lib to LD_LIBRARY_PATH. While the change in tests/e2e/conftest.py is appropriate for the test environment, the approach in the Dockerfiles can be improved for robustness. Modifying ~/.bashrc only affects interactive shells and is not a reliable way to set environment variables for all processes. I've suggested using ldconfig to configure the dynamic linker, which is the standard practice and ensures the library path is available system-wide. The provided repository style guide concerns pull request summaries and titles, and does not contain coding standards, so it was not referenced in the code review comments.

Comment thread: Dockerfile

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```
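To illustrate the distinction the reviewer is making: LD_LIBRARY_PATH is consulted only when it is present in a process's environment (which ~/.bashrc only guarantees for interactive shells), whereas directories registered via ldconfig apply to every process. A toy simulation of that lookup order (simplified; the real dynamic linker also checks DT_RUNPATH, the ld.so cache, and default directories):

```python
def find_library(name, env_ld_path, ldconfig_dirs, fs):
    """Toy model of dynamic-linker search: LD_LIBRARY_PATH dirs first
    (only when set in the environment), then dirs registered via ldconfig."""
    search = (env_ld_path.split(":") if env_ld_path else []) + ldconfig_dirs
    for d in search:
        candidate = f"{d}/{name}"
        if candidate in fs:
            return candidate
    return None  # models the "cannot open shared object file" failure

# The library exists on disk, but is it findable?
fs = {"/usr/local/lib/ascend_transport.so"}

# Non-interactive process: ~/.bashrc never ran, LD_LIBRARY_PATH is empty.
print(find_library("ascend_transport.so", "", [], fs))  # -> None (ImportError)

# After registering /usr/local/lib via /etc/ld.so.conf.d + ldconfig,
# lookup succeeds regardless of the process environment.
print(find_library("ascend_transport.so", "", ["/usr/local/lib"], fs))
# -> /usr/local/lib/ascend_transport.so
```

This is why the ldconfig approach is more robust for containers whose CMD may run non-interactive processes.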

Comment thread: Dockerfile.310p

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```

Comment thread: Dockerfile.310p.openEuler

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib64/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```

Comment thread: Dockerfile.a3

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```

Comment thread: Dockerfile.a3.openEuler

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib64/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```

Comment thread: Dockerfile.openEuler

```dockerfile
python3 -m pip cache purge

RUN echo "export LD_PRELOAD=/usr/lib64/libjemalloc.so.2:$LD_PRELOAD" >> ~/.bashrc
RUN echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib" >> ~/.bashrc
```
Contributor


high

Modifying ~/.bashrc to set LD_LIBRARY_PATH is not a robust solution as it's only sourced for interactive shells. If the container's CMD is changed to run a non-interactive process, this environment variable will not be set, leading to runtime errors. A more reliable method is to configure the system's dynamic linker search paths.

```dockerfile
RUN echo "/usr/local/lib" > /etc/ld.so.conf.d/mooncake.conf && ldconfig
```

Signed-off-by: MrZ20 <2609716663@qq.com>
@MrZ20 MrZ20 marked this pull request as ready for review March 4, 2026 03:36
@MrZ20 MrZ20 requested review from Yikun and wangxiyuan as code owners March 4, 2026 03:36
@wangxiyuan wangxiyuan merged commit 95b44d7 into vllm-project:main Mar 4, 2026
11 checks passed
@MrZ20 MrZ20 deleted the fix_nightly_build branch March 4, 2026 06:05
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 5, 2026
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (40 commits)
  [Feature] Add docs of batch invariance and make some extra operators patch (vllm-project#6910)
  [bugfix]Qwen2.5VL accurate question (vllm-project#6975)
  [CI] Add DeepSeek-V3.2 large EP nightly ci (vllm-project#6378)
  [Ops][BugFix] Fix RoPE shape mismatch for mtp models with flashcomm v1 enabled (vllm-project#6939)
  [bugfix]fix file not found error in nightly of single-node (vllm-project#6976)
  [Bugfix] Fix the acceptance rates dorp issue when applying eagle3 to QuaRot model (vllm-project#6914)
  [CI] Enable auto upgrade e2e estimated time for auto-partition suites (vllm-project#6840)
  [Doc][Misc] Fix msprobe_guide.md documentation issues (vllm-project#6965)
  [Nightly][Refactor]Migrate nightly single-node model tests from `.py` to `.yaml` (vllm-project#6503)
  [BugFix] Improve GDN layer detection for multimodal models (vllm-project#6941)
  [feat]ds3.2 pcp support mtp and chunkprefill (vllm-project#6917)
  [CPU binding] Implement global CPU slicing and improve IRQ binding for Ascend NPUs (vllm-project#6945)
  [Triton] Centralize Ascend extension op dispatch in triton_utils (vllm-project#6937)
  [csrc][bugfix] Add compile-time Ascend950/910_95 compatibility for custom ops between CANN8.5 and 9.0 (vllm-project#6936)
  [300I][Bugfix] fix unquant model weight nd2nz error (vllm-project#6851)
  [doc] fix supported_models (vllm-project#6930)
  [CI] nightly test timeout (vllm-project#6912)
  [CI] Upgrade CANN to 8.5.1 (vllm-project#6897)
  [Model]Add Qwen3-Omni quantization Ascend NPU adaptation and optimization (vllm-project#6828)
  [P/D][v0.16.0]Adapt to RecomputeScheduler in vLLM 0.16.0 (vllm-project#6898)
  ...
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
…ect#6976)

- vLLM version: v0.16.0
- vLLM main:
vllm-project/vllm@15d76f7

---------

Signed-off-by: MrZ20 <2609716663@qq.com>
