
[Refactor]310p_e2e test case update#6539

Merged
wangxiyuan merged 14 commits into vllm-project:main from pu-zhe:310p_test
Feb 7, 2026

Conversation

@pu-zhe
Contributor

@pu-zhe pu-zhe commented Feb 4, 2026

What this PR does / why we need it?

This pull request significantly enhances the test suite by adding new end-to-end test cases for Qwen3 models on the 310P hardware platform. The primary goal is to ensure the stability and correctness of these models under diverse operational conditions, including various parallelism strategies, data types, and quantization methods.

Does this PR introduce any user-facing change?

No

How was this patch tested?

E2E test

Signed-off-by: pu-zhe <zpuaa@outlook.com>
@github-actions
Contributor

github-actions bot commented Feb 4, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.

@gemini-code-assist
Contributor

Summary of Changes

Hello @pu-zhe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the test suite by adding new end-to-end test cases for Qwen3 models on the 310P hardware platform. The primary goal is to ensure the stability and correctness of these models under diverse operational conditions, including various parallelism strategies, data types, and quantization methods. These additions are crucial for maintaining the quality and reliability of the vLLM-ascend project's support for Qwen3 models.

Highlights

  • New End-to-End Tests for Qwen3 Models: This pull request introduces a series of new end-to-end test cases specifically designed for Qwen3 models on the 310P platform.
  • Comprehensive Test Coverage: The added tests cover various configurations, including dense and Mixture-of-Experts (MoE) Qwen3 models, different tensor parallelism sizes (1, 2, and 4), FP16 data type, and W8A8 quantization with Ascend-specific settings.
  • Single and Multi-Card Scenarios: New test files are organized to validate both single-card and multi-card (tensor parallelism) execution environments for Qwen3 models.
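As a plain-data illustration of the coverage described above, the new cases can be sketched as a configuration matrix. The model identifiers and dictionary keys below are assumptions for illustration, not the repository's actual helpers:

```python
# Illustrative sketch of the test configuration matrix from the PR summary.
# Model names and option keys are assumptions, not the real test fixtures.
SINGLECARD_CASES = [
    {"model": "Qwen3-8B", "tp_size": 1, "dtype": "float16", "quantization": None},
    {"model": "Qwen3-8B-W8A8", "tp_size": 1, "dtype": "float16", "quantization": "ascend"},
]

MULTICARD_CASES = [
    {"model": "Qwen3-8B", "tp_size": 2, "dtype": "float16", "quantization": None},
    {"model": "Qwen3-32B-W8A8", "tp_size": 4, "dtype": "float16", "quantization": "ascend"},
    # MoE model, with and without expert parallelism
    {"model": "Qwen3-30B-A3B", "tp_size": 4, "dtype": "float16", "quantization": None,
     "enable_expert_parallel": False},
    {"model": "Qwen3-30B-A3B", "tp_size": 4, "dtype": "float16", "quantization": None,
     "enable_expert_parallel": True},
]

ALL_CASES = SINGLECARD_CASES + MULTICARD_CASES
```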


Changelog
  • tests/e2e/310p/multicard/test_qwen3_dense.py
    • Added new test file.
    • Introduced test_qwen3_dense_tp2_fp16 for Qwen3-8B with 2-way tensor parallelism and FP16.
    • Introduced test_qwen3_dense_tp4_w8a8 for Qwen3-32B-W8A8 with 4-way tensor parallelism, FP16, and Ascend quantization.
  • tests/e2e/310p/multicard/test_qwen3_moe.py
    • Added new test file.
    • Introduced test_qwen3_moe_tp4_fp16 for Qwen3-30B-A3B (MoE) with 4-way tensor parallelism and FP16.
    • Introduced test_qwen3_moe_ep4_fp16 for Qwen3-30B-A3B (MoE) with 4-way tensor parallelism, FP16, and expert parallelism enabled.
  • tests/e2e/310p/singlecard/test_qwen3_dense.py
    • Added new test file.
    • Introduced test_qwen3_dense_tp1_fp16 for Qwen3-8B with single-card (TP1) and FP16.
    • Introduced test_qwen3_dense_tp1_w8a8 for Qwen3-8B-W8A8 with single-card (TP1), FP16, and Ascend quantization.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/_e2e_test.yaml
Activity
  • The pull request was opened by pu-zhe.
  • No review comments or approvals have been recorded yet.
  • No specific progress updates or discussions have been noted.

Signed-off-by: pu-zhe <zpuaa@outlook.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds end-to-end tests for Qwen3 models on 310p devices. The new tests are a good addition. However, there is significant code duplication within each new test file. I've provided suggestions to refactor the tests using pytest.mark.parametrize. This will reduce duplication, improve readability, and make the tests more maintainable.

Comment thread tests/e2e/310p/multicard/test_qwen3_dense.py Outdated
Comment thread tests/e2e/310p/multicard/test_qwen3_moe.py Outdated
Comment thread tests/e2e/310p/singlecard/test_qwen3_dense.py Outdated
pu-zhe added 12 commits February 4, 2026 16:06
Signed-off-by: pu-zhe <zpuaa@outlook.com>
@wangxiyuan wangxiyuan merged commit 1cc2257 into vllm-project:main Feb 7, 2026
25 checks passed
@pu-zhe pu-zhe deleted the 310p_test branch February 7, 2026 10:22
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 9, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Patch] Remove the patch of MiniCPM (vllm-project#5975)
  [P/D] layerwise connector support recompute scheduler (vllm-project#5900)
  [CI] Add workflow support for lint image build (vllm-project#6489)
  [Bugfix] Fix problematic dummy_run & improper input_batch_size in eagle (vllm-project#6517)
  [Refactor]310p_e2e test case update (vllm-project#6539)
  [Refactor]refactor p2p connector (vllm-project#6551)
  [Refactor]refactor 310p attention impl and add ut (vllm-project#6579)
  [Refactor]refactor 310p ops and add ut (vllm-project#6591)
  [Ops][Refactor] Remove custom rotary_embedding operator (vllm-project#6523)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(new Batch vllm-project#8) (vllm-project#6604)
  [Test] Add initial multi modal cases of Qwen2.5-VL-7B-Instruct for disaggregated encoder  (vllm-project#5301)
  [CI] Fix broken CI (vllm-project#6599)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#10) (vllm-project#6173)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#11) (vllm-project#6176)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#8) (vllm-project#6129)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#7) (vllm-project#6023)
  [CI][Misc] Some improvement for github action (vllm-project#6587)
  [Image] Bump mooncake version to v0.3.8.post1 (vllm-project#6428)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
### What this PR does / why we need it?
This pull request significantly enhances the test suite by adding new
end-to-end test cases for Qwen3 models on the 310P hardware platform.
The primary goal is to ensure the stability and correctness of these
models under diverse operational conditions, including various
parallelism strategies, data types, and quantization methods.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
E2E test
- vLLM version: v0.15.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.15.0

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: momochenchuw <chenchuw@huawei.com>
@wangxiyuan wangxiyuan mentioned this pull request Feb 24, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
Signed-off-by: pu-zhe <zpuaa@outlook.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
Signed-off-by: pu-zhe <zpuaa@outlook.com>
