
support basic long_seq feature st #5140

Merged
wangxiyuan merged 11 commits into vllm-project:main from LookAround0301:ST
Dec 19, 2025

Conversation

@LookAround0301 (Contributor) commented Dec 17, 2025

What this PR does / why we need it?

support basic long_seq feature st

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: LookAround <lixushi@huawei.com>
@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description and write a clear commit message to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces end-to-end smoke tests for a long sequence feature using PCP/DCP on multi-card setups. The tests cover eager mode, full graph compilation, and piecewise execution. My review focuses on improving code quality and maintainability. I've identified a misleading docstring and significant code duplication across the test functions. I've provided suggestions to correct the docstring and refactor the tests to eliminate redundancy, making them easier to maintain in the future.

Comment on lines +19 to +22
"""Compare the short outputs of HF and vLLM when using greedy sampling.

Run `pytest tests/e2e/multicard/test_qwen3_moe.py`.
"""

Severity: high

The file's docstring contains an incorrect command to run the tests. It refers to test_qwen3_moe.py instead of the current file, test_long_sequence_basic.py. This is likely a copy-paste error and can be confusing for other developers.

Suggested change
"""Compare the short outputs of HF and vLLM when using greedy sampling.
Run `pytest tests/e2e/multicard/test_qwen3_moe.py`.
"""
"""Compare the short outputs of HF and vLLM when using greedy sampling.
Run `pytest tests/e2e/multicard/long_sequence/test_long_sequence_basic.py`.
"""

Comment on lines +28 to +141
decode_context_parallel_size=2,
max_num_batched_tokens=1024,
enable_expert_parallel=True,
block_size=128
) as runner:
runner.model.generate(prompts, sampling_params)

model = "vllm-ascend/Qwen3-30B-A3B-W8A8"
with VllmRunner(
model,
enforce_eager=True,
max_model_len=1024,
tensor_parallel_size=8,
prefill_context_parallel_size=2,
decode_context_parallel_size=2,
enable_expert_parallel=True,
block_size=128,
quantization="ascend",
) as runner:
runner.model.generate(prompts, sampling_params)


def test_pcp_dcp_full_graph():
prompts = [
"The capital of France is",
"Hello, my name is Tom, I am",
"The president of United States is",
"AI future is"
]
model = "deepseek-ai/DeepSeek-V2-Lite-Chat"
sampling_params = SamplingParams(max_tokens=32, temperature=0.0)
with VllmRunner(
model,
enforce_eager=False,
max_model_len=1024,
tensor_parallel_size=2,
prefill_context_parallel_size=2,
decode_context_parallel_size=2,
max_num_batched_tokens=1024,
enable_expert_parallel=True,
block_size=128,
compilation_config={
"cudagraph_mode": "FULL_DECODE_ONLY",
"cudagraph_capture_sizes": [4, 8, 24, 48, 60]}
) as runner:
runner.model.generate(prompts, sampling_params)

model = "vllm-ascend/Qwen3-30B-A3B-W8A8"
with VllmRunner(
model,
enforce_eager=False,
max_model_len=1024,
tensor_parallel_size=8,
prefill_context_parallel_size=2,
decode_context_parallel_size=2,
enable_expert_parallel=True,
block_size=128,
quantization="ascend",
compilation_config={
"cudagraph_mode": "FULL_DECODE_ONLY",
"cudagraph_capture_sizes": [4, 8, 24, 48, 60]}
) as runner:
runner.model.generate(prompts, sampling_params)


def test_pcp_dcp_piece_wise():
prompts = [
"The capital of France is",
"Hello, my name is Tom, I am",
"The president of United States is",
"AI future is"
]
model = "deepseek-ai/DeepSeek-V2-Lite-Chat"
sampling_params = SamplingParams(max_tokens=32, temperature=0.0)
with VllmRunner(
model,
enforce_eager=False,
max_model_len=1024,
tensor_parallel_size=2,
prefill_context_parallel_size=2,
decode_context_parallel_size=2,
max_num_batched_tokens=1024,
enable_expert_parallel=True,
block_size=128
) as runner:
runner.model.generate(prompts, sampling_params)

model = "vllm-ascend/Qwen3-30B-A3B-W8A8"
with VllmRunner(
model,
enforce_eager=False,
max_model_len=1024,
tensor_parallel_size=8,
prefill_context_parallel_size=2,
decode_context_parallel_size=2,
enable_expert_parallel=True,
block_size=128,
quantization="ascend"
) as runner:
runner.model.generate(prompts, sampling_params)

Severity: high

The three test functions (test_pcp_dcp_basic, test_pcp_dcp_full_graph, test_pcp_dcp_piece_wise) are highly repetitive, making the code difficult to maintain. Key components like prompts, sampling_params, and the VllmRunner configuration are duplicated in each function.

A better approach is to refactor this using pytest.mark.parametrize. This will create a single, parameterized test, eliminating redundancy and making the test configurations explicit and easier to manage. The suggested code implements this refactoring. Please note that import pytest is included in the suggestion and should be moved to the top of the file.

import pytest


PROMPTS = [
    "The capital of France is",
    "Hello, my name is Tom, I am",
    "The president of United States is",
    "AI future is"
]
SAMPLING_PARAMS = SamplingParams(max_tokens=32, temperature=0.0)
DEEPSEEK_MODEL = "deepseek-ai/DeepSeek-V2-Lite-Chat"
QWEN_MODEL = "vllm-ascend/Qwen3-30B-A3B-W8A8"

BASE_DEEPSEEK_ARGS = {
    "max_model_len": 1024,
    "tensor_parallel_size": 2,
    "prefill_context_parallel_size": 2,
    "decode_context_parallel_size": 2,
    "max_num_batched_tokens": 1024,
    "enable_expert_parallel": True,
    "block_size": 128
}
BASE_QWEN_ARGS = {
    "max_model_len": 1024,
    "tensor_parallel_size": 8,
    "prefill_context_parallel_size": 2,
    "decode_context_parallel_size": 2,
    "enable_expert_parallel": True,
    "block_size": 128,
    "quantization": "ascend",
}

def _run_models(deepseek_vllm_runner_args, qwen_vllm_runner_args):
    with VllmRunner(DEEPSEEK_MODEL, **deepseek_vllm_runner_args) as runner:
        runner.model.generate(PROMPTS, SAMPLING_PARAMS)
    
    with VllmRunner(QWEN_MODEL, **qwen_vllm_runner_args) as runner:
        runner.model.generate(PROMPTS, SAMPLING_PARAMS)

@pytest.mark.parametrize("extra_args", [
    {"enforce_eager": True},
    {
        "enforce_eager": False,
        "compilation_config": {
            "cudagraph_mode": "FULL_DECODE_ONLY",
            "cudagraph_capture_sizes": [4, 8, 24, 48, 60]
        },
    },
    {"enforce_eager": False},
], ids=["basic", "full_graph", "piece_wise"])
def test_pcp_dcp(extra_args):
    deepseek_args = {**BASE_DEEPSEEK_ARGS, **extra_args}
    qwen_args = {**BASE_QWEN_ARGS, **extra_args}
    _run_models(deepseek_args, qwen_args)
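As a side note, the dict-merge pattern in the suggested `test_pcp_dcp` refactor (base config unpacked first, per-case `extra_args` second) relies on later keys winning on conflicts. A minimal standalone sketch of just that pattern, using simplified stand-in config keys and no vLLM dependency:

```python
# Sketch of the dict-merge pattern from the suggested refactor: a shared base
# config is unpacked first, then the per-case extra args, so later keys win.
BASE_ARGS = {
    "max_model_len": 1024,
    "tensor_parallel_size": 2,
    "enforce_eager": True,
}

def merge_args(base: dict, extra: dict) -> dict:
    # Dict unpacking builds a new dict; keys from `extra` override
    # duplicates in `base`, and neither input is mutated.
    return {**base, **extra}

cases = {
    "basic": {"enforce_eager": True},
    "full_graph": {
        "enforce_eager": False,
        "compilation_config": {"cudagraph_mode": "FULL_DECODE_ONLY"},
    },
}

merged = merge_args(BASE_ARGS, cases["full_graph"])
print(merged["enforce_eager"])   # False: the per-case value overrides the base
print(merged["max_model_len"])   # 1024: untouched base keys carry through
```

One caveat of this pattern: the merge is shallow, so a nested dict such as `compilation_config` is replaced wholesale rather than merged key-by-key, which is exactly the behavior the parameterized test wants here.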

Signed-off-by: LookAround <lixushi@huawei.com>
@weijinqian0 added the ready (read for review) and ready-for-test (start test by label for PR) labels Dec 18, 2025
Signed-off-by: LookAround <lixushi@huawei.com>
@wangxiyuan wangxiyuan merged commit 76e58d6 into vllm-project:main Dec 19, 2025
22 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Dec 19, 2025
…to eplb_refactor

* 'main' of https://github.com/vllm-project/vllm-ascend: (52 commits)
  [Doc]Add the user_guide doc file regarding fine-grained TP. (vllm-project#5084)
  [pref] qwen3_next add triton ops : fused_sigmoid_gating_delta_rule_update (vllm-project#4818)
  [Feature] Add token mask for DispatchGmmCombineDecode operator (vllm-project#5171)
  [CI] Improve CI (vllm-project#5078)
  [Refactor] remove some metadata variables in attention_v1. (vllm-project#5160)
  Add Qwen3-VL-235B-A22B-Instruct tutorials (vllm-project#5167)
  [Doc] Add a perf tune section (vllm-project#5127)
  [Image] Refactor image build (vllm-project#5175)
  [refactor] refactor weight trans nz and transpose (vllm-project#4878)
  [BugFix]Fix precision issue for LoRA feature (vllm-project#4141)
  【Doc】Deepseekv3.1/R1 doc enhancement (vllm-project#4827)
  support basic long_seq feature st (vllm-project#5140)
  [Bugfix] install trition for test_custom_op (vllm-project#5112)
  [2/N][Pangu][MoE] Remove Pangu Related Code (vllm-project#5130)
  [bugfix] Use FUSED_MC2 MoE comm path for the op `dispatch_ffn_combine` (vllm-project#5156)
  [BugFix] Fix top_p,top_k issue with EAGLE and add top_p,top_k in EAGLE e2e (vllm-project#5131)
  [Doc][P/D] Fix MooncakeConnector's name (vllm-project#5172)
  [Bugfix] Fix in_profile_run in mtp_proposer dummy_run (vllm-project#5165)
  [Doc] Refact benchmark doc (vllm-project#5173)
  [Nightly]  Avoid max_model_len being smaller than the decoder prompt to prevent single-node-accuray-tests from failing (vllm-project#5174)
  ...

Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
chenaoxuan pushed a commit to chenaoxuan/vllm-ascend that referenced this pull request Dec 20, 2025
### What this PR does / why we need it?
support basic long_seq feature st 

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: LookAround <lixushi@huawei.com>
@LookAround0301 LookAround0301 deleted the ST branch January 4, 2026 06:33
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
### What this PR does / why we need it?
support basic long_seq feature st

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: LookAround <lixushi@huawei.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026

Labels

module:tests · ready (read for review) · ready-for-test (start test by label for PR)


3 participants