
[Image] Refactor image build#5175

Merged
wangxiyuan merged 4 commits into vllm-project:main from Potabk:image
Dec 19, 2025

Conversation


@Potabk Potabk commented Dec 18, 2025

What this PR does / why we need it?

Previously, we used a hybrid-architecture cross-compilation approach for image builds. This method had a problem: cross-compilation performance was very poor, leading to extremely long build times (about 4 hours) and occasional failures (see https://github.com/vllm-project/vllm-ascend/actions/runs/20152861650/job/57849208186). Therefore, I recommend building each architecture separately and then merging the results into a multi-arch manifest, which significantly reduces image build time (about 20 minutes).
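
The "separate architecture build followed by manifest merging" approach can be sketched roughly as below. This is a minimal illustration, not the PR's actual workflow: the image name `REPO`, the tag names, and the runner layout are assumptions for the example.

```shell
# Placeholder image name -- substitute the project's real registry/repo.
REPO=quay.io/example/vllm-ascend

# Build each architecture natively on a runner of that architecture,
# so no QEMU emulation (the slow cross-compilation path) is involved.

# On an amd64 runner:
docker buildx build --platform linux/amd64 -t "$REPO:latest-amd64" --push .

# On an arm64 runner:
docker buildx build --platform linux/arm64 -t "$REPO:latest-arm64" --push .

# In a final job, stitch the per-arch images into one multi-arch tag:
docker buildx imagetools create -t "$REPO:latest" \
  "$REPO:latest-amd64" "$REPO:latest-arm64"

# Verify the merged manifest lists both platforms:
docker buildx imagetools inspect "$REPO:latest"
```

The key design point is that each `buildx build` runs on native hardware in parallel, and only the cheap manifest-merge step happens at the end, which is consistent with the build-time drop the PR reports.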

Does this PR introduce any user-facing change?

How was this patch tested?

@gemini-code-assist
Contributor

Note

Gemini is unable to generate a review for this pull request due to the file types involved not being currently supported.

Signed-off-by: wangli <wangli858794774@gmail.com>
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message to fulfill the PR description, helping reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@wangxiyuan wangxiyuan added the `ready` (read for review) and `ready-for-test` (start test by label for PR) labels Dec 18, 2025
Signed-off-by: wangli <wangli858794774@gmail.com>
@Potabk Potabk removed the `ready` (read for review) and `ready-for-test` (start test by label for PR) labels Dec 19, 2025
Signed-off-by: wangli <wangli858794774@gmail.com>
@Potabk Potabk added and then removed the `ready` and `ready-for-test` labels Dec 19, 2025
@Potabk Potabk requested review from Yikun and wangxiyuan December 19, 2025 03:16
@Potabk
Collaborator Author

Potabk commented Dec 19, 2025

also cc @Yikun

Signed-off-by: wangli <wangli858794774@gmail.com>
@wangxiyuan wangxiyuan merged commit a6eaf81 into vllm-project:main Dec 19, 2025
28 checks passed
@Potabk Potabk deleted the image branch December 19, 2025 06:42
@Potabk Potabk mentioned this pull request Dec 19, 2025
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Dec 19, 2025
…to eplb_refactor

* 'main' of https://github.com/vllm-project/vllm-ascend: (52 commits)
  [Doc]Add the user_guide doc file regarding fine-grained TP. (vllm-project#5084)
  [pref] qwen3_next add triton ops : fused_sigmoid_gating_delta_rule_update (vllm-project#4818)
  [Feature] Add token mask for DispatchGmmCombineDecode operator (vllm-project#5171)
  [CI] Improve CI (vllm-project#5078)
  [Refactor] remove some metadata variables in attention_v1. (vllm-project#5160)
  Add Qwen3-VL-235B-A22B-Instruct tutorials (vllm-project#5167)
  [Doc] Add a perf tune section (vllm-project#5127)
  [Image] Refactor image build (vllm-project#5175)
  [refactor] refactor weight trans nz and transpose (vllm-project#4878)
  [BugFix]Fix precision issue for LoRA feature (vllm-project#4141)
  [Doc] Deepseekv3.1/R1 doc enhancement (vllm-project#4827)
  support basic long_seq feature st (vllm-project#5140)
  [Bugfix] install trition for test_custom_op (vllm-project#5112)
  [2/N][Pangu][MoE] Remove Pangu Related Code (vllm-project#5130)
  [bugfix] Use FUSED_MC2 MoE comm path for the op `dispatch_ffn_combine` (vllm-project#5156)
  [BugFix] Fix top_p,top_k issue with EAGLE and add top_p,top_k in EAGLE e2e (vllm-project#5131)
  [Doc][P/D] Fix MooncakeConnector's name (vllm-project#5172)
  [Bugfix] Fix in_profile_run in mtp_proposer dummy_run (vllm-project#5165)
  [Doc] Refact benchmark doc (vllm-project#5173)
  [Nightly]  Avoid max_model_len being smaller than the decoder prompt to prevent single-node-accuray-tests from failing (vllm-project#5174)
  ...

Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
wangxiyuan pushed a commit that referenced this pull request Dec 19, 2025
### What this PR does / why we need it?
Some tiny bugfix for
#5175

Signed-off-by: wangli <wangli858794774@gmail.com>
chenaoxuan pushed a commit to chenaoxuan/vllm-ascend that referenced this pull request Dec 20, 2025
### What this PR does / why we need it?

In the past time, we used a hybrid architecture cross-compilation
approach for image building. This method had a problem:
cross-compilation performance was very poor, leading to extremely long
build times(abort 4h) and even a probability of failure(see
https://github.com/vllm-project/vllm-ascend/actions/runs/20152861650/job/57849208186).
Therefore, I recommend using a separate architecture build followed by
manifest merging, which significantly reduces image build time(20min).

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
chenaoxuan pushed a commit to chenaoxuan/vllm-ascend that referenced this pull request Dec 20, 2025
### What this PR does / why we need it?
Some tiny bugfix for
vllm-project#5175

Signed-off-by: wangli <wangli858794774@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
### What this PR does / why we need it?

In the past time, we used a hybrid architecture cross-compilation
approach for image building. This method had a problem:
cross-compilation performance was very poor, leading to extremely long
build times(abort 4h) and even a probability of failure(see
https://github.com/vllm-project/vllm-ascend/actions/runs/20152861650/job/57849208186).
Therefore, I recommend using a separate architecture build followed by
manifest merging, which significantly reduces image build time(20min).

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
### What this PR does / why we need it?
Some tiny bugfix for
vllm-project#5175

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
### What this PR does / why we need it?

In the past time, we used a hybrid architecture cross-compilation
approach for image building. This method had a problem:
cross-compilation performance was very poor, leading to extremely long
build times(abort 4h) and even a probability of failure(see
https://github.com/vllm-project/vllm-ascend/actions/runs/20152861650/job/57849208186).
Therefore, I recommend using a separate architecture build followed by
manifest merging, which significantly reduces image build time(20min).

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
### What this PR does / why we need it?
Some tiny bugfix for
vllm-project#5175

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>

Labels

ci/build · ready (read for review) · ready-for-test (start test by label for PR)


2 participants