
[Qwen3TTS][Feat] Code2Wav batched decoding#1426

Merged
hsliuustc0106 merged 16 commits into vllm-project:main from JuanPZuluaga:feat/code2wav-batched-decode
Feb 24, 2026

Conversation

Contributor

@JuanPZuluaga JuanPZuluaga commented Feb 21, 2026


Purpose

This PR adds support for Code2Wav batched decoding. Note that in the stage configs for offline benchmarking, the stage-0 batch size must be >= the stage-1 batch size; otherwise stage-1 never accumulates the number of requests specified by the batch size (with async scheduling this would probably not be an issue). Other changes:

  • Added batched decoding to the benchmark (it can be removed if we don't want it; I can also refactor it a bit to support larger batches). A sketch of the batching loop is below.
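For context, a minimal sketch of the batching loop in the benchmark script (hedged: names such as omni, inputs, and batch_size follow the snippets quoted later in this thread, and the exact code lives in end2end.py):

# Sketch only - group the prompts and decode each group in one pass.
# The omni.generate(...) call shape follows the excerpt quoted in the
# review below; everything else here is an assumption.
for batch_start in range(0, len(inputs), batch_size):
    batch = inputs[batch_start : batch_start + batch_size]
    for stage_outputs in omni.generate(batch, sampling_params_list=None):
        for output in stage_outputs.request_output:
            ...  # collect the per-request audio output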

Test Plan

  1. Added a benchmark prompts txt file.
  2. Batch size for stage 0 is always 4.
  3. Ran evaluation with bs=1 and bs=4 for stage-1.
  4. Compared results.

Test Result

Baseline with 12 samples:

# First batch
INFO 02-21 22:29:02 [stats.py:500] [Overall Summary]
INFO 02-21 22:29:02 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:02 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:29:02 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:02 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_wall_time_ms            | 5,036.018 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_total_tokens            |       321 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_avg_time_per_request_ms | 1,259.005 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_avg_tokens_per_s        |    63.741 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_stage_0_wall_time_ms    | 4,406.654 |
INFO 02-21 22:29:02 [stats.py:500] | e2e_stage_1_wall_time_ms    |   630.117 |
INFO 02-21 22:29:02 [stats.py:500] +-----------------------------+-----------+

# Second batch
INFO 02-21 22:29:08 [stats.py:500] [Overall Summary]
INFO 02-21 22:29:08 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:08 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:29:08 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:08 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_wall_time_ms            | 5,340.400 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_total_tokens            |       424 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_avg_time_per_request_ms | 1,335.100 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_avg_tokens_per_s        |    79.395 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_stage_0_wall_time_ms    | 5,067.353 |
INFO 02-21 22:29:08 [stats.py:500] | e2e_stage_1_wall_time_ms    |   274.454 |
INFO 02-21 22:29:08 [stats.py:500] +-----------------------------+-----------+

# Third batch
INFO 02-21 22:29:12 [stats.py:500] [Overall Summary]
INFO 02-21 22:29:12 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:12 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:29:12 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:29:12 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_wall_time_ms            | 4,708.225 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_total_tokens            |       408 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_avg_time_per_request_ms | 1,177.056 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_avg_tokens_per_s        |    86.657 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_stage_0_wall_time_ms    | 4,435.627 |
INFO 02-21 22:29:12 [stats.py:500] | e2e_stage_1_wall_time_ms    |   273.613 |
INFO 02-21 22:29:12 [stats.py:500] +-----------------------------+-----------+

Stage 1 batching with the same 12 samples:

# First batch
INFO 02-21 22:31:51 [stats.py:500] [Overall Summary]
INFO 02-21 22:31:51 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:31:51 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:31:51 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:31:51 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_wall_time_ms            | 5,345.297 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_total_tokens            |       333 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_avg_time_per_request_ms | 1,336.324 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_avg_tokens_per_s        |    62.298 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_stage_0_wall_time_ms    | 4,767.987 |
INFO 02-21 22:31:51 [stats.py:500] | e2e_stage_1_wall_time_ms    |   578.176 |
INFO 02-21 22:31:51 [stats.py:500] +-----------------------------+-----------+

# Second batch
INFO 02-21 22:31:56 [stats.py:500] [Overall Summary]
INFO 02-21 22:31:56 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:31:56 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:31:56 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:31:56 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_wall_time_ms            | 5,120.655 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_total_tokens            |       425 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_avg_time_per_request_ms | 1,280.164 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_avg_tokens_per_s        |    82.997 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_stage_0_wall_time_ms    | 4,899.338 |
INFO 02-21 22:31:56 [stats.py:500] | e2e_stage_1_wall_time_ms    |   222.618 |
INFO 02-21 22:31:56 [stats.py:500] +-----------------------------+-----------+

# Third batch
INFO 02-21 22:32:01 [stats.py:500] [Overall Summary]
INFO 02-21 22:32:01 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:32:01 [stats.py:500] | Field                       |     Value |
INFO 02-21 22:32:01 [stats.py:500] +-----------------------------+-----------+
INFO 02-21 22:32:01 [stats.py:500] | e2e_requests                |         4 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_wall_time_ms            | 5,365.096 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_total_tokens            |       432 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_avg_time_per_request_ms | 1,341.274 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_avg_tokens_per_s        |    80.520 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_stage_0_wall_time_ms    | 5,139.006 |
INFO 02-21 22:32:01 [stats.py:500] | e2e_stage_1_wall_time_ms    |   227.250 |
INFO 02-21 22:32:01 [stats.py:500] +-----------------------------+-----------+

The speedup is visible in e2e_stage_1_wall_time_ms when comparing the batched run against the baseline.

The benchmark results clearly show the improvement:

Baseline stage-1 times: 630ms, 274ms, 273ms
Batched stage-1 times: 578ms, 222ms, 227ms
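
Summed over the three batches, stage-1 wall time drops from about 1,178 ms (630.117 + 274.454 + 273.613) to about 1,028 ms (578.176 + 222.618 + 227.250), i.e. roughly a 13% reduction on this small sample.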

The command to run the batched decoding:

python3 examples/offline_inference/qwen3_tts/end2end.py --query-type CustomVoice --txt-prompts examples/offline_inference/qwen3_tts/benchmark_prompts.txt --batch-size 4 --stage-configs-path vllm_omni/model_executor/stage_configs/qwen3_tts_batch.yaml --output-dir benchmark_output_final --log-stats

(To run the baseline, one only needs to change the stage configs path.)


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts & test commands. Please state the reasons if your code doesn't require additional test scripts. For test file guidelines, please check the test style doc.
  • The test results. Please paste a before/after results comparison, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b877279efd


Comment on lines +123 to +127
slices = get_forward_context().ubatch_slices
if slices is not None and len(slices) > 1:
    boundaries = [0]
    for s in slices:
        n = s if isinstance(s, int) else (s.token_slice.stop - s.token_slice.start)


P1: Split inputs by per-request lengths before batched decode

This split logic uses forward_context.ubatch_slices, but in GPU generation those slices come from maybe_create_ubatch_slices(...) and can represent microbatch partitions rather than request boundaries when ubatching is enabled. In that mode, one request can be split into multiple pseudo-requests here, so model_outputs/sr lengths no longer match input_batch.num_reqs, which then fails in gpu_generation_model_runner.sample_tokens (length check for dict list outputs) or misroutes audio to requests.
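
A minimal sketch of the guard this implies (hedged; it mirrors the "token_slice" check the author adds in a later commit):

# Hedged sketch: only treat ubatch_slices as per-request lengths when
# every entry is a plain integer token count; entries carrying a
# token_slice come from microbatching and may split a single request.
def slices_are_per_request(slices) -> bool:
    return (
        slices is not None
        and len(slices) > 1
        and not any(hasattr(s, "token_slice") for s in slices)
    )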


Comment on lines +270 to +274
additional_information = {
    "task_type": [args.query_type],
    "text": [text],
    "language": ["Auto"],
    "speaker": ["Vivian"],


P2: Keep Base cloning metadata when overriding prompts from file

When --txt-prompts is set, this branch rebuilds additional_information with only generic text/speaker fields for every query_type. If users run --query-type Base, the talker preprocessing path requires Base-specific clone inputs like ref_audio (and often ref_text), so this new path raises runtime validation errors instead of generating audio.
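
A hedged sketch of the template-based fix suggested later in this thread (default_query_info is a hypothetical name for the task's default query dict):

# Hedged sketch: start from the task's default query so Base-specific
# clone inputs (ref_audio, ref_text, ...) survive, and override only
# the text field. default_query_info is a hypothetical name.
additional_information = {**default_query_info, "text": [text]}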


Collaborator

@lishunyang12 lishunyang12 left a comment


A few comments on the batched decode path and edge cases.

if slices is not None and len(slices) > 1:
    boundaries = [0]
    for s in slices:
        n = s if isinstance(s, int) else (s.token_slice.stop - s.token_slice.start)
Collaborator

@lishunyang12 lishunyang12 Feb 22, 2026


The Codex bot flagged this too — ubatch_slices may not map 1:1 to request boundaries under microbatching. Worth verifying this doesn't split a single request across slices.

Contributor Author


Fixed in the last commit; we now check for "token_slice".

Collaborator


Looks good, the token_slice guard makes sense.

"sr": [sr_tensor] * num_req,
},
)

Collaborator

@lishunyang12 lishunyang12 Feb 22, 2026


Does SpeechTokenizer.decode() officially support a list of dicts? The old code passed a single dict — if the upstream API hasn't changed, this could silently produce wrong results.

Contributor Author


Indeed, it officially supports it; see here.
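
For reference, the batched call shape as it appears in the excerpts quoted in this thread (the exact SpeechTokenizer.decode signature comes from the linked upstream code, so treat the details as assumptions):

# Shape sketch based on the diff excerpts above: decode() accepts a
# list of per-request dicts and returns one waveform per entry.
valid_codes = [{"audio_codes": codes_a}, {"audio_codes": codes_b}]
wavs, _ = tok.decode(valid_codes)
assert len(wavs) == len(valid_codes)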

Collaborator


Got it, thanks for the link.

cmax = int(codes_fq.max().item())
head = codes_fq[: min(2, total_frames), : min(8, q)].cpu().tolist()
c = valid_codes[0]["audio_codes"]
logger.info(
Collaborator

@lishunyang12 lishunyang12 Feb 22, 2026


If tok.decode(valid_codes) returns fewer waveforms than expected, wavs[j] will raise an IndexError. An assert len(wavs) == len(valid_codes) would help.

Contributor Author


Fixed.

Collaborator


Thanks.

for output in stage_outputs.request_output:
    request_id = output.request_id
    audio_data = output.outputs[0].multimodal_output["audio"]
    # async_chunk mode returns a list of chunks; concatenate them.
Collaborator

@lishunyang12 lishunyang12 Feb 22, 2026


Any reason not to always pass a list here? The conditional unwrapping means omni.generate() gets different types depending on batch size.

Contributor Author


Fixed.
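
For clarity, the shape of the fix (a hedged sketch; the final code is not quoted in this thread):

# Always hand omni.generate() a list, even for a single request,
# instead of unwrapping it as batch[0] if len(batch) == 1 else batch.
batch_input = list(batch)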

--batch-size 4 \
--stage-configs-path vllm_omni/model_executor/stage_configs/qwen3_tts_batch.yaml
```

Collaborator

@lishunyang12 lishunyang12 Feb 22, 2026


Since --batch-size must match a CUDA graph capture size, a runtime power-of-two check would save users from a cryptic CUDA graph error.

Contributor Author


Fixed.

@hsliuustc0106
Collaborator

@vllm-omni-reviewer

@github-actions

🤖 VLLM-Omni PR Review

Code Review: [Qwen3TTS][Feat] Code2Wav batched decoding

1. Overview

This PR adds support for batched decoding in the Code2Wav stage of Qwen3TTS, enabling multiple audio generation requests to be processed in a single forward pass through the SpeechTokenizer. The changes include:

  • Core batching logic in qwen3_tts_code2wav.py
  • Updated end2end.py example with batch processing support
  • New stage config file for batched processing
  • Benchmark prompts file for testing
  • Documentation updates in README

Overall Assessment: Positive. The implementation is well-structured and the benchmark results demonstrate meaningful performance improvements in stage-1 wall time.


2. Code Quality

Strengths

  • Clean separation of concerns with the _split_request_ids helper method
  • Proper handling of edge cases (empty inputs, invalid lengths, context trimming)
  • Good use of existing vLLM infrastructure (ubatch_slices) for batch handling

Concerns

Import reordering noise: Multiple files have import statements reordered (moving from vllm import ... after other imports). While not incorrect, this adds significant noise to the diff. These should ideally be in a separate commit or PR.

Silent warning removal: In the original code, when context trim >= decoded length, a warning was logged. In the batched version, this silently continues without any indication:

# Original (lines 179-186):
if cut < audio_np.shape[0]:
    audio_np = audio_np[cut:]
else:
    logger.warning(
        "Context trim %d >= decoded length %d; returning empty audio.",
        cut,
        audio_np.shape[0],
    )
    return empty_ret

# New (lines 218-220):
if cut < audio_np.shape[0]:
    audio_np = audio_np[cut:]
else:
    continue  # context trim >= decoded length

Consider adding a debug-level log for this case.

Type annotation consistency: the forward method returns OmniOutput, and the return type hint must say so (a tuple[torch.Tensor, torch.Tensor] hint would be inconsistent). The signature in this PR is correct:

def forward(
    self,
    input_ids: torch.Tensor | None = None,
    ...
) -> OmniOutput:  # Good - signature is correct

3. Architecture & Design

Good Patterns

  • The _split_request_ids method cleanly abstracts batch splitting logic
  • Leveraging get_forward_context().ubatch_slices for batch boundary detection is elegant
  • The parsing loop correctly handles mixed valid/invalid requests in a batch

Design Questions

Batch size constraint: The README mentions that batch size must match CUDA graph capture sizes. This is an important constraint that should be validated programmatically:

batch_size = args.batch_size
# Consider adding validation:
# valid_sizes = [1, 2, 4, 8, 16, 32, 64, 128, 256]
# if batch_size not in valid_sizes:
#     logger.warning("batch-size should be a power of 2 for CUDA graph compatibility")

Single-item batch workaround: The code has a workaround for single-item batches:

batch_input = batch[0] if len(batch) == 1 else batch

This suggests the API might benefit from consistently handling list inputs regardless of length.


4. Security & Safety

Resource Management

  • No memory leaks detected; tensors are properly managed
  • The batched approach should be more memory-efficient than sequential processing

Input Validation

  • Good validation for n % q != 0 (warmup/dummy runs)
  • Empty input handling is robust

Potential Issue

The _split_request_ids method falls back to returning [ids] if forward context is unavailable or ubatch_slices is None. This could mask issues if batching is expected but the context isn't properly set up:

def _split_request_ids(self, ids: torch.Tensor) -> list[torch.Tensor]:
    if is_forward_context_available():
        slices = get_forward_context().ubatch_slices
        if slices is not None and len(slices) > 1:
            ...
    return [ids]  # Fallback - could mask issues

Consider adding a debug log when falling back to single-batch mode.


5. Testing & Documentation

Documentation

  • README update is clear and explains the batching feature well
  • The constraint about stage-0 BS >= stage-1 BS is important and well-documented

Missing Test Coverage

The PR lacks unit tests for the new batching functionality. Consider adding tests for:

  1. Single request (baseline)
  2. Multiple requests with valid codes
  3. Mixed batch with some invalid requests
  4. Empty batch handling
  5. Edge case: context trim >= decoded length

Test Results

The benchmark results clearly show the improvement:

  • Baseline stage-1 times: 630ms, 274ms, 273ms
  • Batched stage-1 times: 578ms, 222ms, 227ms

The improvement is visible, though the first batch shows less improvement (possibly due to warmup).


6. Specific Suggestions

vllm_omni/model_executor/models/qwen3_tts/qwen3_tts_code2wav.py

Line 117-128: Add docstring explaining ubatch_slices:

def _split_request_ids(self, ids: torch.Tensor) -> list[torch.Tensor]:
    """Split concatenated input_ids into per-request segments using forward context.
    
    Uses ubatch_slices from forward context which contains either:
    - int: number of tokens in the request
    - slice object with token_slice attribute
    
    Returns list of per-request id tensors, or [ids] if not in batched context.
    """

Line 218-220: Add debug logging for silent edge case:

if cut < audio_np.shape[0]:
    audio_np = audio_np[cut:]
else:
    logger.debug(
        "Context trim %d >= decoded length %d for request %d; returning empty audio.",
        cut, audio_np.shape[0], idx
    )
    continue

examples/offline_inference/qwen3_tts/end2end.py

Line 263-281: Add comment explaining template approach:

# Load prompts from text file if provided.
# Use the default query as a template so task-specific fields
# (e.g. ref_audio for Base) are preserved; only override text.
if args.txt_prompts:

Line 295: Consider adding batch size validation:

batch_size = args.batch_size
if batch_size < 1 or batch_size & (batch_size - 1) != 0:  # Not a positive power of 2
    logger.warning(
        "batch-size %d is not a power of 2; CUDA graph capture may fail. "
        "Use sizes like 1, 2, 4, 8, 16...", batch_size
    )

Import Changes

Consider reverting the import reordering changes in:

  • examples/offline_inference/qwen3_omni/end2end.py
  • tests/conftest.py
  • tests/engine/test_async_omni_engine_abort.py
  • tests/entrypoints/openai_api/test_image_server.py
  • tests/entrypoints/test_omni_llm.py
  • vllm_omni/benchmarks/patch/patch.py
  • vllm_omni/entrypoints/omni.py
  • vllm_omni/entrypoints/omni_stage.py
  • vllm_omni/entrypoints/openai/api_server.py

Or move them to a separate "style: fix import ordering" commit.


7. Approval Status

LGTM with suggestions

The core implementation is solid and the feature works as intended. The benchmark results demonstrate real performance improvements. However, I recommend addressing the following before merging:

Required:

  1. Add debug logging for the silent edge case (context trim >= decoded length)

Recommended:

  2. Separate import reordering into a different commit
  3. Add batch size validation with warning for non-power-of-2 values
  4. Consider adding unit tests for the batching functionality

Optional:

  5. Add docstring to _split_request_ids method
  6. Add debug log when falling back to single-batch mode in _split_request_ids

The PR is functionally correct and ready for merge after addressing the required item. The recommended items would improve maintainability but are not blocking.


This review was generated automatically by the VLLM-Omni PR Reviewer Bot
using glm-5.

@JuanPZuluaga
Contributor Author

JuanPZuluaga commented Feb 22, 2026

Thanks for the comments, @lishunyang12! I think I've addressed most of them.

Do you think we could improve the e2e script a bit? I could:

  • add some logic to run a larger benchmark with the provided .txt
  • should we keep the benchmark text samples, or do we indicate that one could pass a txt file?

Additionally,

  • would it be useful to run a larger benchmark and add the results here? I could run on an NVIDIA GeForce RTX 5090 (Blackwell, sm120).

@lishunyang12
Collaborator

I think the e2e script is fine as-is for this PR — the txt file option is already there which is nice. Adding a larger benchmark result would be a plus but not blocking. Up to you if you want to include it.

Collaborator

@lishunyang12 lishunyang12 left a comment


All previous comments addressed — LGTM.

@JuanPZuluaga
Contributor Author

> I think the e2e script is fine as-is for this PR — the txt file option is already there which is nice. Adding a larger benchmark result would be a plus but not blocking. Up to you if you want to include it.

Thanks @lishunyang12. Let's merge this and I'll run a benchmark together with #1438.

@lishunyang12
Collaborator

Please fix pre-commit and DCO.

@JuanPZuluaga JuanPZuluaga force-pushed the feat/code2wav-batched-decode branch from 2e03970 to 6ffa2b8 on February 24, 2026 06:54
@hsliuustc0106
Collaborator

@vllm-omni-reviewer

pablo added 11 commits February 24, 2026 07:07
@JuanPZuluaga JuanPZuluaga force-pushed the feat/code2wav-batched-decode branch from 5f46ae3 to 1a0d5fc on February 24, 2026 07:08
@github-actions

🤖 VLLM-Omni PR Review

Code Review: [Qwen3TTS][Feat] Code2Wav batched decoding

1. Overview

This PR adds support for batched decoding in the Code2Wav stage of Qwen3TTS. The changes include:

  • Core model changes: Refactored qwen3_tts_code2wav.py to handle multiple requests in a single forward pass
  • Example script updates: Added --batch-size and --txt-prompts arguments to end2end.py
  • Configuration: Added qwen3_tts_batch.yaml with batched stage configs
  • Documentation: Updated README with batched decoding instructions

Overall Assessment: Positive. The implementation is well-structured and the benchmark results demonstrate meaningful speedup in stage-1 processing. However, there are a few areas that need attention.


2. Code Quality

Strengths

  • Good validation for power-of-two batch sizes (CUDA graph alignment)
  • Comprehensive handling of edge cases (empty inputs, warmup runs, invalid token counts)
  • Clear logging for debugging

Issues & Suggestions

vllm_omni/model_executor/models/qwen3_tts/qwen3_tts_code2wav.py:120-134

The _split_request_ids method has a complex condition that could be clearer:

if slices is not None and len(slices) > 1 and not any(hasattr(s, "token_slice") for s in slices):

Consider adding a docstring explaining when each branch is taken, or extracting the condition into a helper method like _is_batched_context().

vllm_omni/model_executor/models/qwen3_tts/qwen3_tts_code2wav.py:165-168

The assertion after batched decode could fail silently in production:

wavs, _ = tok.decode(valid_codes)
assert len(wavs) == len(valid_codes), f"Code2Wav returned {len(wavs)} waveforms for {len(valid_codes)} requests"

Suggestion: Consider raising a more descriptive RuntimeError instead of assert, as assertions can be disabled with -O flag.

examples/offline_inference/qwen3_tts/end2end.py:270-286

The file reading lacks error handling:

if args.txt_prompts:
    with open(args.txt_prompts) as f:
        lines = [line.strip() for line in f if line.strip()]

Suggestion: Add explicit error handling:

if args.txt_prompts:
    try:
        with open(args.txt_prompts) as f:
            lines = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        raise FileNotFoundError(f"Prompts file not found: {args.txt_prompts}")
    if not lines:
        raise ValueError(f"No valid prompts found in {args.txt_prompts}")

3. Architecture & Design

Strengths

  • Clean separation between single-request and batched code paths
  • Good use of OmniOutput abstraction for multimodal outputs
  • Configuration-driven approach for enabling batching

Design Considerations

vllm_omni/model_executor/models/qwen3_tts/qwen3_tts_code2wav.py:144-226

The forward method has grown significantly. Consider extracting helper methods:

  1. _parse_request_codes(ids, q) - Parse single request's codes
  2. _build_empty_output(num_requests) - Create empty output structure
  3. _trim_audio(audio_np, ctx_frames, upsample) - Handle context trimming

This would improve readability and testability.

Batch size coordination between stages

The README correctly notes that both stages need max_batch_size >= batch_size. However, this constraint isn't validated at runtime. Consider adding a validation check in the Omni class initialization.
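
A hedged sketch of such a check (stage_configs and max_batch_size are assumptions drawn from the README constraint, not the actual Omni API):

# Sketch only: fail fast when a stage's configured max_batch_size is
# smaller than the batch size the caller requested.
for i, cfg in enumerate(stage_configs):
    if cfg.max_batch_size < requested_batch_size:
        raise ValueError(
            f"stage-{i} max_batch_size={cfg.max_batch_size} is smaller "
            f"than the requested batch size {requested_batch_size}"
        )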


4. Security & Safety

Input Validation

examples/offline_inference/qwen3_tts/end2end.py:248-255

Good validation for batch size being a power of two. Consider also adding:

  • Upper bound check (e.g., batch_size <= 64) to prevent memory issues
  • Validation that batch_size matches the stage config's max_batch_size

Resource Management

The batched processing in end2end.py correctly iterates through batches, but there's no explicit cleanup between batches. If memory pressure is a concern, consider adding:

for batch_start in range(0, len(inputs), batch_size):
    batch = inputs[batch_start : batch_start + batch_size]
    omni_generator = omni.generate(batch, sampling_params_list=None)
    # ... process ...
    torch.cuda.empty_cache()  # Optional: explicit cleanup between batches

5. Testing & Documentation

Documentation

examples/offline_inference/qwen3_tts/README.md:90-104

The documentation is clear and helpful. Minor suggestion: add an example of the expected performance improvement with concrete numbers from the benchmark.

Test Coverage

Missing unit tests: The PR lacks unit tests for:

  1. _split_request_ids method with various slice configurations
  2. Batched forward pass with mixed valid/invalid requests
  3. Edge cases (empty batch, single-item batch, all-invalid batch)

Suggested test cases:

def test_split_request_ids_batched():
    # Test with multiple requests
    ...

def test_split_request_ids_single():
    # Test with single request (no batching)
    ...

def test_forward_empty_batch():
    # Test with all empty/invalid requests
    ...

def test_forward_mixed_batch():
    # Test with some valid, some invalid requests
    ...

Import Order Changes

The changes in tests/entrypoints/openai_api/test_image_server.py, tests/entrypoints/test_omni_llm.py, and vllm_omni/benchmarks/patch/patch.py appear to be lint fixes. These should ideally be in a separate PR to keep changes focused.


6. Specific Suggestions

qwen3_tts_code2wav.py:120-134

def _split_request_ids(self, ids: torch.Tensor) -> list[torch.Tensor]:
    """
    Split concatenated input_ids into per-request segments using forward context.
    ...
    """
    if not is_forward_context_available():
        return [ids]

    slices = get_forward_context().ubatch_slices
    # Check if we're in a batched context with simple integer slices
    if slices is None or len(slices) <= 1:
        return [ids]

    # Skip if slices contain token_slice objects (different batching mode)
    if any(hasattr(s, "token_slice") for s in slices):
        return [ids]

    boundaries = [0]
    for s in slices:
        boundaries.append(boundaries[-1] + s)
    return [ids[boundaries[i] : boundaries[i + 1]] for i in range(len(boundaries) - 1)]

end2end.py:270-286

Add validation:

if args.txt_prompts:
    if not os.path.exists(args.txt_prompts):
        raise FileNotFoundError(f"Prompts file not found: {args.txt_prompts}")
    with open(args.txt_prompts) as f:
        lines = [line.strip() for line in f if line.strip()]
    if not lines:
        raise ValueError(f"No valid prompts found in {args.txt_prompts}")

qwen3_tts_code2wav.py:165-168

Replace assertion with proper error:

wavs, _ = tok.decode(valid_codes)
if len(wavs) != len(valid_codes):
    raise RuntimeError(
        f"SpeechTokenizer returned {len(wavs)} waveforms for "
        f"{len(valid_codes)} requests"
    )

7. Approval Status

LGTM with suggestions

The PR is fundamentally sound and achieves its stated goal of enabling batched decoding for Code2Wav. The benchmark results demonstrate the expected performance improvement.

Before merging, please address:

  1. Required: Add error handling for file not found in end2end.py
  2. Required: Replace assert with proper exception in forward method
  3. Recommended: Add unit tests for the new batched functionality
  4. Optional: Consider extracting helper methods in the forward function for better readability
  5. Optional: Move import order fixes to a separate PR

The import reordering changes in test files and patch.py are unrelated to the main feature and should ideally be separated, but this is a minor concern.


This review was generated automatically by the VLLM-Omni PR Reviewer Bot
using glm-5.

pablo added 2 commits February 24, 2026 07:22
@JuanPZuluaga
Contributor Author

@lishunyang12 pre-commit and DCO fixed.

@hsliuustc0106 hsliuustc0106 added the ready (label to trigger buildkite CI) label Feb 24, 2026
Collaborator

@hsliuustc0106 hsliuustc0106 left a comment


lgtm

pablo and others added 2 commits February 24, 2026 11:21
Collaborator

@linyueqian linyueqian left a comment


@hsliuustc0106 hsliuustc0106 merged commit 4de077e into vllm-project:main Feb 24, 2026
7 checks passed
@Gaohan123
Collaborator

@JuanPZuluaga Could you please resolve the merged CI failure? Thanks. https://buildkite.com/vllm/vllm-omni/builds/3246/steps/canvas

@hsliuustc0106
Collaborator

> @JuanPZuluaga Could you please resolve the merged CI failure? Thanks. https://buildkite.com/vllm/vllm-omni/builds/3246/steps/canvas

I think the failures are not related to this PR and are known to us; they also fail in other PR merges.

@JuanPZuluaga JuanPZuluaga deleted the feat/code2wav-batched-decode branch February 26, 2026 13:54
with1015 added a commit to with1015/vllm-omni that referenced this pull request Apr 6, 2026
* [Frontend][Model] Support batch request with refined OmniDiffusionReq… (#797)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>

* [Model]: add FLUX.1-dev model (#853)

* [BugFix] ignore mm data from stages to async omni (#954)

Signed-off-by: dengyunyang <584797741@qq.com>

* Revert "[BugFix] ignore mm data from stages to async omni" (#1023)

* [Bugfix] Modify output to model_runner_output (#1026)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Feature] Support cache-dit for Wan 2.2 inference (#1021)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>

* [Doc]Format profiling doc (#993)

Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Hardware] Support platforms and plugin system (#774)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Core]: KV Cache Transfer Encapsulation (#979)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Test]Delete skip mark for amd ci test and fix CI failure (#927)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix][Doc]Specify Qwen3-TTS model name for each task type (#1036)

Signed-off-by: Kyle Huang <yellowsea@gmail.com>

* [Misc] pin version of fa3-fwd (#1051)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [CI] [ROCm] Add more AMD CI tests (#1039)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix qwen image layerd in dummy run (#1027)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [BugFix] Fix noisy output without setting a seed in Qwen Image (#1043)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* [bugfix] remove vllm speech route (#1060)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [Debug] Update GLM-Image Pipeline (#1049)

Co-authored-by: root <root@hk01dgx028.cm.cluster>

* [Diffusion][Bugfix] Fix the flash_attn backends selection logic (#983)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix the accuracy issue of multimodal input. (#1020)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Rein Yang <ruiruyang2@gmail.com>

* [Bugfix] Set VaeImageProcessor `do_convert_rgb` True (#1032)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [feat]: adapt batch request for flux (#1028)

Signed-off-by: wuzhongjian wuzhongjian_yewu@cmss.chinamobile.com

* [CI] Change Qwen3 Omni stage placement strategy  (#1072)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>

* [BugFix] Fix to use correct attn backend (#1038)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>

* [Perf] Qwen3 Omni talker mtp optimization (#1005)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Wan2.2] Optimize memory usage with conditional transformer loading (#980)

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [Feat] Support XPU Backend in vLLM-Omni (#191)

Signed-off-by: Fanli Lin <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli0116@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Fix] stabilize diffusion images LoRA E2E across CI drift (#1075)

Signed-off-by: dongbo910220 <1275604947@qq.com>

* [Bugfix][Test] Re-enable the log simple tests (#1065)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] pr conflict fix, bugfix ignore mm data from stages to async omni (#1025)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Doc][Bagel] Add BAGEL-7B-MoT documentation and edit the default stage configuration (#987)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: jzz <e1583181@u.nus.edu>

* [Fix] Increase max wait time for server readiness to accommodate model loading (#1089)

Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>

* [Benchmark] Add vLLM-Omni Omni model online benchmark (#780)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Remove Mooncake/Yuanrong connector import warning (#1091)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* fix: UnboundLocalError for role in streaming audio/image responses (#784)

Signed-off-by: Pierre Le Guen <26087574+PierreLeGuen@users.noreply.github.com>

* [Misc] update wechat image (#1096)

* [Feature] Support DiT Layerwise (Blockwise) CPU Offloading (#858)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [BugFix] Modify max_tokens and modify the log and fix #1103 (#1097)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix modulate_index shape error in Qwen-Image-Edit Task (#1100)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Platform] Add supports_torch_inductor interface (#1108)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [BugFix] Fix Qwen3 Omni talker mtp torch.compile startup error (#1104)

Signed-off-by: ram16g <anlianfengjie@163.com>
Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Co-authored-by: ram16g <anlianfengjie@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] fix request_id of image generation in api server (#1112)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Perf]: CFG parallel abstraction (#851)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix Qwen3 TTS 0.6B profile run hang (#995) (#1082)

* [CI] [ROCm] Quick fix amd ci (#1116)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix benchmark audio timing error and add benchmark test (#1109)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix][Qwen3TTS] Load speaker_id/voices from model configuration (#1079)

Signed-off-by: pablo <juanz9312@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>

* [NPU] Align with GPUModelRunner (#1114)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [FEATURE] /v1/images/edit interface (#1101)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Bugfix] Fix NPU SDPA attention mask shape and semantics (#1031)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: muziyuhui666 <111362884+muziyuhui666@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [TeaCache]: Add Coefficient Estimation (#940)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [CI]: Bagel E2E Smoked Test (#1074)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Misc] Bump version to 0.14.0 (#1128)

Signed-off-by: Roger Wang <hey@rogerw.io>

* [Doc] First stable release of vLLM-Omni (#1129)

Signed-off-by: Roger Wang <hey@rogerw.io>

* [Misc] Align error handling with upstream vLLM v0.14.0 (#1122)

Signed-off-by: anna <lee.anna@navercorp.com>
Co-authored-by: anna <lee.anna@navercorp.com>

* [Feature] add Tensor Parallelism to LongCat-Image(-Edit) (#926)

Signed-off-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>

* [CI] Temporarily remove slow tests. (#1143)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: princepride <wangzhipeng628@gmail.com>

* [CI] Refactor test_sequence_parallel.py and add a warmup run for more accurate performance stat (#1165)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Dev/rebase v0.15.0 (#1159)

Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* Docs update paper link (#1169)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Co-authored-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>

* [Debug] Clear Dockerfile.ci to accelerate build image (#1172)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Debug] Correct Unreasonable Long Timeout (#1175)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Doc]Fix - Align with repo. (#1176)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Bugfix][Qwen-Image-Edit] Add a warning log for none negative_prompt (#1170)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] fix qwen image oom (#1168)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [Hardware] Disable compile of diffusion on XPU (#1148)

Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>

* [Doc] Fix vLLM version in user docs (#1179)

Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>

* [Refactor] Refactor async chunk and fix the shape mismatch issue (#1151)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* bugfix: /images/edits endpoint fails pipeline data format check (#1141)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Perf] resolving prolonged `cudastreamsynchronize` execution in z image processing (#1105)

Signed-off-by: erfgss <97771661+erfgss@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Bugfix] modify RTF use audio_e2e/audio_duration (#1157)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Doc] Highlight paper & slides. (#1186)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [chore] Remove zmq context initialize (#1187)

Signed-off-by: xiedeyantu <czjourney@163.com>

* [NPU] Update Dockerfile and docs for v0.14.0 (#671)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] E2E metric incorrect qwen3-omni with async chunk feature (#1018)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] opt doc (#1118)

Signed-off-by: David Chen <530634352@qq.com>

* [Bugfix] Fix tp+sp accuracy, incorrect process group mapping (#1178)

Signed-off-by: David Chen <530634352@qq.com>

* [Feature] Enable use_audio_in_video for Qwen 3 Omni Online (#1198)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix] async_chunk rebase v0.15.0 (#1195)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [feature]: support flux cache_dit (#1145)

Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>

* [CI] Add CI branch coverage calculation,  fix statement coverage results and add log before test for buildkite  log group (#1120)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Wan 2.2][Diffusion] Add TP Support (#964)

Signed-off-by: weichen <calvin_zhu0210@outlook.com>

* [Hardware] [Feat] Setup platform dependent package installation (#1046)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: PopSoda2002 <zhouhp.me@gmail.com>
Co-authored-by: gcanlin <canlinguosdu@gmail.com>

* [XPU] Fix XPU UTs for basic coverage (#1164)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* [Test] Add BuildKite test-full script for full CI. (#867)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Refactor] Reuse upstream Qwen3MoeSparseMoeBlock (#1202)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] Fix wan2.2 ti2v (#1221)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Fix '--max-generated-image-size' cli args type (#1249)

Signed-off-by: ApsarasX <apsarax@outlook.com>

* [Bugfix] Ensure seed=0 is correctly handled in image edit (#1248)

Signed-off-by: ApsarasX <apsarax@outlook.com>

* [Docs] Add example image download step to Image-To-Video examples (#1258)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [Bugfix] Fix padding bug in 12Hz tokenizer ConvTranspose1d decode (#1241)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [bugfix] Fix multimodal_output property to check completion outputs where audio data is attached (#1203)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [Doc] Update QA relevant to quantization  (#1257)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [Bugfix] Fix Doc link Rrror (#1263)

Signed-off-by: lishunyang <lishunyang12@163.com>

* Process-Scoped GPU Memory Accounting (#1204)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>

* [ComfyUI]: ComfyUI integration (#1113)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>

* fix: add diffusion offload args to OmniConfig group instead of serve_parser (#1271)

Signed-off-by: Chenguang ZHENG <645327136@qq.com>

* [Doc] Adding models/pipelines/features Tutorial (#1196)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>

* [CI] Add env variable check for nightly CI  (#1281)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [CI] Add pytest markers to current tests and update the doc. (#577)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Diffusion][Perf] Remove Redundant Communication Cost by Refining SP Hook Design (#1275)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>

* [Feature] Opt metrics structure (#891)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Test] Add example test cases for omni online (#1086)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [CI] Reduce the time for Diffusion Sequence Parallelism Test (#1283)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Model] SupportHunyuanImage3 Diffusion Model in vllm-omni (#1085)

Signed-off-by: Semmer2 <semmer@live.cn>

* [Chore] Update copyright year. (#1256)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [feature]: support Flux.1-dev CFG-Parallel (#1269)

* [Bugfix] Fix 'NoneType' AttributeError in stable-diffusion model detect (#1254)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* [Doc] Update Qwen3-TTS docs for consistency with Omni examples (#1226)

Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Fix]Ensure HuggingFace downloads complete before initialization. (#1213)

Signed-off-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [BugFix] Fixed the issue where ignore_eos was not working. (#1286)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [Test] Add e2e tests for Qwen3-TTS speech endpoint (#1206)

Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>

* [Feat]: support VAE patch parallelism (#756)

Signed-off-by: dongbo910220 <1275604947@qq.com>
Co-authored-by: hsliuustc0106 <liuhongsheng4@huawei.com>

* [CI] Disable Qwen3-TTS E2E Test in pipeline.yml (#1306)

Signed-off-by: Gao Han <hgaoaf@connect.ust.hk>

* [Misc] Add per-request generator_device to online image gen and edit (#1183)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bagel]: Support TP (#1293)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* [Bugfix] Fix image edit RoPE crash when explicit height/width are provided (#1265)

Signed-off-by: lishunyang <lishunyang12@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] Sync (#1216)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Bugfix] fix precision issues of qwen3-omni when enable async_chunk without system prompt (#1288)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* [Debug] Add trigger to concurrent stage init (#1274)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix][Qwen3-TTS] Fix task type (#1317)

Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>

* Unifying CLI Argument Naming Style (#1309)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>

* [Bugfix][Qwen3-TTS] Preserve original model ID in omni_snapshot_download (#1318)

* [CI] Run nightly tests. (#1333)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Feature]: FP8 Quantization Support for DiT  (#1034)

Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>

* Fix yield token metrics and opt metrics record stats (#1292)

* [Test] L2 & L3 Test Case Stratification Design for Omni Model (#1272)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Pref] Support Qwen3 Omni code2wav batch infernce with async chunk (#1246)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: Ziming Huang <1520787127@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* update qwen3-omni & qwen2.5-onmi openai client (#1304)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* [Feature] Support Wan2.2 T2V and I2V Online Serving with OpenAI /v1/videos API (#1073)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Signed-off-by: SamitHuang <285365963@qq.com>
Co-authored-by: Flora Feng <4florafeng@gmail.com>

* [Feature] add Tensor Parallelism to SD_3.5 (#1336)

Signed-off-by: GG-li <3226868735@qq.com>

* [Feature]async scheduling to overlap chunk IO and compute (#951)

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Co-authored-by: Bhanu068 <voutharoja.bhanu06@gmail.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Bugfix] reused metrics to modify the API Server token statistics in Stream Response (#1301)

Signed-off-by: John Liu BUAA <liukecheng97@gmail.com>

* Refactor CPU Offloading Backend Pattern (#1223)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [DOC] Doc for CI test - Details about five level stucture and some other files. (#1167)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: yenuo26 <410167048@qq.com>

* [Bugfix] remove Tongyi-MAI/Z-Image-Turbo related test from L2 ci (#1348)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Misc] wechat image update (#1354)

Signed-off-by: David Chen <530634352@qq.com>

* [Misc] Support WorkerWrapperBase and CustomPipeline for Diffusion Worker (#764)

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>

* [Feature][Bugfix] Add CFG feature to Bagel (#1310)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Feature]: Diffusion sleep to use process level memory calculation (#1276)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: dsinghvi <divyanshsinghvi@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* change qwen3-omni open cudagraph by default (#1352)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [XPU] Update Bagel's flash_attn_varlen_func to fa utils (#1295)

Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>

* [Test] Add Omni Model Performance Benchmark Test (#1321)

Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>

* [BugFix]: Revert utils change (#1369)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* [Rebase] Rebase to vllm v0.16.0 (#1357)

Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Isotr0py <Isotr0py@outlook.com>
Co-authored-by: ZJY0516 <zhu.jiangyun@foxmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>

* [Test] Fix expansion and example test case for qwen3-omni (#1358)

Signed-off-by: yenuo26 <410167048@qq.com>

* [v0.16.0][BUG FIX] Fix hunyuan MOE after update to 0.16.0 (#1401)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [0.16.0] remove cuda hard-code for Hunyuan Image3 (#1402)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [XPU] Add XPU Dockerfile and related docs (#1162)

Signed-off-by: Yan Ma <yan.ma@intel.com>
Signed-off-by: Daniel Huang <daniel1.huang@intel.com>
Co-authored-by: Daniel Huang <daniel1.huang@intel.com>

* [Bugfix] Fix Hardcoded Datatypes in Z-image (#1393)

Signed-off-by: Alex Brooks <albrooks@redhat.com>

* [Feature]: Support disaggregated inference pipeline for Qwen3_TTS (#1161)

Signed-off-by: Sy03 <1370724210@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Feature] Add automated PR reviewer bot with GLM integration (#1424)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* [Misc] Add Qwen2.5-Omni-3B model support to Gradio demo (#1382)

Signed-off-by: UsamaKenway <usamakenway@gmail.com>

* [misc] Feature/pr reviewer auto trigger&update model (#1431)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Hunter Liu <hunter@liu.sh>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "[misc] Feature/pr reviewer auto trigger&update model" (#1432)

* [Doc] Update GPU installation commands (#1434)

* [ROCM] [CI] fix dockerfile.rocm to support nightly build and also fix amd ci v0.16.0rc1 (#1380)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Feature][BAGEL] Combine multi-branch cfg into a single batch to accelerate inference. (#1429)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Feat]: add ASCII art logo for vLLM-Omni (#1430)

* [Bug] [Bagel] Fix kv transfer bug (#1437)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: Wang Zhipeng: princepride <wangzhipeng628@gmail.com>

* [CI] Set L2 & L3 tests running conditions. (#1344)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Feature] vLLM-Omni RDMA connector (#1019)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* [Minor][Refactor] Pass seq_token_counts explicitly (#1425)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Misc] Extend Diffusion Benchmark script to other backends (#875)

Signed-off-by: NickLucche <nlucches@redhat.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Feature] Support Stage Based Deployment CLI (#939)

Signed-off-by: wuhang <wuhang6@huawei.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: wuhang <whlbx@hotmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Doc] Optimize vLLM-Omni metrics documentation (#1311)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Forward all vllm-omni serve command parameters to model (#985)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc]: Add bagel single/multi node usage with mooncake document (#1450)

* [Qwen3TTS][Feat] Code2Wav batched decoding (#1426)

Signed-off-by: pablo <pablo@agigo.ai>
Co-authored-by: pablo <pablo@agigo.ai>

* [CI] Remove overwhelming debug log (#1463)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Misc] update wechat image (#1464)

Signed-off-by: David Chen <530634352@qq.com>

* [Doc] Refine Diffusion Tutorial Documents (#1305)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>

* [Bugfix] Robust Audio Data Handling in _create_audio_choice (#1222)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>

* [Bugfix]: Fix merging updated additional information to ensure dict type (#1296)

Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>

* [Model] Add new nextstep_1 (Diffusion) model (only T2I) (#612)

Signed-off-by: Dong Wang <dongw2019@gmail.com>
Signed-off-by: sniper35 <dongw2019@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Add TTS configuration options (#1177)

Signed-off-by: Yanick Schraner <yanick.schraner@bs.ch>

* [Debug] Multi-Request for Qwen 3 Omni use_audio_in_video (#1433)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix] Fix case-sensitive task_type matching in Qwen3TTSModelForGeneration (#1455)

Signed-off-by: Sangchun Ha <seomk9896@gmail.com>

* [BugFix] process request.num_cached_tokens if it equals the initial value (#1468)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Bugfix] Fix SDPA attention mask dtype and shape (Fix #857) (#1349)

Signed-off-by: jader <yjader@foxmail.com>

* [Test] Reduce Perf test cases and fix stage config modification (#1449)

Signed-off-by: yenuo26 <410167048@qq.com>

* [NPU] Upgrade to v0.16.0 (#1375)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [CI] Update Dockerfile for vllm-omni CI image and remove obsolete dep… (#1491)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Fix][Chore] Qwen3-TTS Modeling Minor Code Sanity Improvements (#1482)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>

* [Bugfix] Fix tuple/list KV cache extraction crash (#1405)

Signed-off-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] format LoRA-related docs for end users (#1009)

Signed-off-by: AndyZhou952 <jzhoubc@connect.ust.hk>
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>

* [Feature] Support Wan2.2 output with irregular shapes (#1279)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Misc] Migrate L1 tests to use pytest-mock (#1315)

Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>

* [Bugfix] Fix LoRA Scaling on Active Adapters (#1421)

Signed-off-by: Alex Brooks <albrooks@redhat.com>

* [Bugfix] fix recording of generated audio frames in offline inference (#1312)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>

* [Model] Support OmniGen2 (#513)

Signed-off-by: Yupu <feng.yu.pu0330@gmail.com>

* [Bugfix][Qwen3TTS] (#1289)

Signed-off-by: pablo <juanz9312@gmail.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Use pull through cache image for H100 pool (#1518)

Signed-off-by: Kevin H. Luu <khluu000@gmail.com>

* [ROCm] [CI] [Docker] Point to use the latest vLLM v0.16.0 stable version (#1500)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix offline text_to_image error from #1009 (#1515)

Signed-off-by: David Chen <530634352@qq.com>

* [XPU] Enable FLASH_ATTN on XPU (#1332)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* Revert gpu_1 job to use regular image (#1521)

Signed-off-by: Kevin H. Luu <khluu000@gmail.com>

* [Chore] remove unused logger in omni_diffusion (#531) (#1509)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Qwen3TTS][Feat] Streaming output (#1438)

Signed-off-by: pablo <pablo@agigo.ai>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: pablo <pablo@agigo.ai>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Race condition in MultiprocExecutor on concurrent access to Scheduler (#1448)

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc][Test][Misc] ComfyUI test, more screenshot, and code cleaning (#1435)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [Performance] Qwen3-Omni performance optimization (#1378)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [Feature] Support HSDP for diffusion models (#1339)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [CI] fixed CI timeout (#1460)

Signed-off-by: zhumingjue <zhumingjue@huawei.com>
Signed-off-by: zhumingjue138 <zhumingjue@huawei.com>

* [Bugfix] Use UDS for zmq address if --stage-id is not set (#1522)

Signed-off-by: wuhang <wuhang6@huawei.com>

* [BugFix] Restore talker's config (#1524)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Canlin Guo <961750412@qq.com>

* [XPU] fix qwen_omni after rebase to v0.16.0 (#1416)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Platform] Enable layerwise offload on all hardware (#1492)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* diffusion: enable VAE patch parallel for SD3.5 (#1428)

Signed-off-by: dongbo910220 <1275604947@qq.com>

* [Perf] GLM Image (#920)

Signed-off-by: JaredforReal <w13431838023@gmail.com>
Signed-off-by: Jared Wen <w13431838023@gmail.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [skip ci][Doc] add design docs for async chunk in qwen3-omni (#962)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* feat(qwen3-tts): Add CUDA Graph support for speech tokenizer decoder (#1205)

Signed-off-by: xulusjb <fdukeshik@gmail.com>
Co-authored-by: xulusjb <fdukeshik@gmail.com>

* [New Model]: XiaomiMiMo/MiMo-Audio-7B-Instruct support (#750)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: 齐保元 <qibaoyuan@xiaomi.com>
Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: GG-li <3226868735@qq.com>
Signed-off-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: Baoyuan Qi <qibaoyuan@126.com>
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: dongbo910220 <1275604947@qq.com>
Signed-off-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Signed-off-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: baoyuan qi <qibaoyuan@126.com>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Signed-off-by: Prajwal A <prajwalanagani@gmail.com>
Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: 丁宁 <nndding@gmail.com>
Signed-off-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning@xiaomi.com>
Co-authored-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Zhang Shijin <zhangshijin@xiaomi.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Co-authored-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Canlin Guo <canlinguosdu@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: JohnJan <wuzhongjian_yewu@cmss.chinamobile.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Co-authored-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: TJian <tunjian.tan@embeddedllm.com>
Co-authored-by: shijin zhang <zsj1364226740@gmail.com>
Co-authored-by: Zhou Taichang <tzhouam@connect.ust.hk>
Co-authored-by: root <root@hk01dgx028.cm.cluster>
Co-authored-by: Prajwal A <34590600+LawJarp-A@users.noreply.github.com>
Co-authored-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: dingning <dingning7@xiaomi.com>
Co-authored-by: ning ding <nndding@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Feature]: Native GGUF Quantization Support for DiT (#1285)

Signed-off-by: David Chen <530634352@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Add benchmark for `v1/audio/speech` non-streaming (#1408)

Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Version] Auto-generate version using `setuptools_scm` (#1224)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Feat]: Support Async chunk cleanup (#1087)

Signed-off-by: Sy03 <1370724210@qq.com>

* [Profiler] Support online profiling (#1136)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>

* [Bugfix] Fix redundant finished req status updating on OmniGenerationScheduler (#1510)

Signed-off-by: shijin zhang <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: 齐保元 <qibaoyuan@xiaomi.com>

* [XPU][NPU][ROCM] enable cpu_offloading flag for non_cuda (#1488)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: gcanlin <canlinguosdu@gmail.com>

* [Chore] Cleanup dead code in GGUF DiT code path (#1533)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Doc] Update installation instructions for vllm 0.16.0 (#1505)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Doc] [skip ci] Sync. (#1363)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>

* [CI][skip ci]Update H100 image link based on #1518 (#1538)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* Fix no embed text spk tokens (#1540)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>

* [Debug] Merge vllm pull 35368 (#1534)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Docs] update async chunk docs diagram [skip ci] (#1530)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* fix(qwen3-tts): fix Base ICL voice clone producing corrupted audio (#1554)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [NPU][Bugfix] Align GPU side and recover qwen3-tts (#1564)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [BugFix] Fix unexpected crash when init OmniDiffusion (#1562)

Signed-off-by: Semmer2 <semmer@live.cn>

* [CI] Modify some CI test cases to run on L4 environment to reduce H100 resource usage. (#1543)

Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>

* [BugFix]: fix a lot of bugs (#1565)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* feat: add HyperCLOVAX-SEED-Omni-8B support

Model files:
- vllm_omni/diffusion/models/hyperclovax_vision/: vision decoder pipeline
  (HyperCLOVAXVisionPipeline) using flow matching diffusion + VisionTransformer
- vllm_omni/diffusion/models/hyperclovax_audio/: audio decoder pipeline
  (HyperCLOVAXAudioPipeline) using Unit-BigVGAN codec
- vllm_omni/model_executor/stage_input_processors/hyperclovax_seed_omni.py:
  thinker2vision_decoder and thinker2audio_decoder — extract discrete tokens from
  LLM output; truncate/pad vision codes to 729 (27x27) for decoder

Registry:
- vllm_omni/diffusion/registry.py: register HyperCLOVAXVisionPipeline and
  HyperCLOVAXAudioPipeline with post-process functions

Stage config:
- vllm_omni/model_executor/stage_configs/hcx_omni.yaml: 3-stage config
  Stage 0: LLM thinker (TP=4, GPUs 0-3), Stage 1: vision decoder (GPU 4),
  Stage 2: audio decoder (GPU 5)

Bug fixes for HyperCLOVAX compatibility:
- diffusion/request.py: add extra dict field to OmniDiffusionRequest so
  vision_tokens/audio_tokens from stage input processors reach the pipeline
- entrypoints/async_omni_diffusion.py: extract OmniTokensPrompt.additional_information
  into OmniDiffusionRequest.extra before creating request
- entrypoints/omni_stage.py: skip empty engine inputs (text-only requests where
  thinker2vision_decoder/thinker2audio_decoder return [])
- entrypoints/async_omni.py: handle skipped sentinel in _process_single_result
  so text-only requests complete without crashing on Stage 1/2
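
The `extra` plumbing above, as a minimal hedged sketch: a dataclass-style request with the `extra` field, plus a hypothetical helper that copies `additional_information` into it. Only the `extra` field, `OmniDiffusionRequest`, and the `additional_information` key come from this commit; every other name is illustrative.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class OmniDiffusionRequest:
    request_id: str                       # illustrative field
    prompt: str | None = None             # illustrative field
    # Carries vision_tokens/audio_tokens from stage input processors
    # through to the diffusion pipeline.
    extra: dict[str, Any] = field(default_factory=dict)


def build_request(request_id: str,
                  additional_information: dict[str, Any] | None) -> OmniDiffusionRequest:
    """Hypothetical helper mirroring the async_omni_diffusion.py fix."""
    req = OmniDiffusionRequest(request_id=request_id)
    if additional_information:
        req.extra.update(additional_information)
    return req
```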

* fix: correct decoder params and HCX porting fixes

- hcx_omni.yaml: guidance_scale 3.5→0.75, num_inference_steps 30→50
  (matches OmniServe production defaults; 3.5 caused over-amplified
  autoguidance → shrunken/degraded output images)
- omni_stage.py: skip empty engine inputs for text-only requests
- async_omni_diffusion.py: extract OmniTokensPrompt.additional_information
  into OmniDiffusionRequest.extra (audio_tokens/vision_tokens)
- registry.py: HCX Omni diffusion model registration fix

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: HyperCLOVAX-SEED-Omni-8B stage pipeline and entrypoint fixes

* fix: change guidance_scale from 9.0 to 0.75 (autoguidance scale, OmniServe default)

* feat: add audio decoder Stage 2 to hcx_omni pipeline

- Wire HyperCLOVAXAudioPipeline as Stage 2 in hcx_omni.yaml
- GPU 5 assigned for audio decoder (Unit-BigVGAN / NCCosybigvganDecoder)
- Add runtime edge 0->2 (thinker -> audio decoder)
- Implement post-generation PCM chunk streaming for audio output
  (4800 samples / 200ms per SSE event @ 24kHz, int16 base64-encoded)

Refs: github.com/vllm-project/vllm-omni/pull/869 (already incorporated)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
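
A minimal sketch of the PCM chunk streaming described above, assuming a float32 waveform in [-1, 1]; the 4800-sample / 200 ms / 24 kHz / int16 / base64 parameters come from this commit, the function name does not.

```python
import base64

import numpy as np

SAMPLE_RATE = 24_000
CHUNK_SAMPLES = 4_800  # 200 ms at 24 kHz


def iter_pcm_chunks(waveform: np.ndarray):
    """Yield base64-encoded int16 PCM chunks suitable for one SSE event each."""
    pcm = (np.clip(waveform, -1.0, 1.0) * 32767.0).astype(np.int16)
    for start in range(0, len(pcm), CHUNK_SAMPLES):
        chunk = pcm[start:start + CHUNK_SAMPLES]
        yield base64.b64encode(chunk.tobytes()).decode("ascii")
```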

* fix: vllm version compatibility for HyperCLOVAX audio decoder startup

- config/model.py: try/except fallback for AttentionBackendEnum import
  (vllm.v1.attention.backends.registry absent in older vllm builds)
- pipeline_hyperclovax_audio.py: return actual named_parameters() from
  load_weights() when using MAR checkpoint so diffusers_loader strict
  check passes (weights loaded eagerly in __init__ via MAR extraction)
- qwen3_omni_moe_thinker.py, qwen2_5_omni_thinker.py: try/except stubs
  for check_interleaved_audio_video and merge_interleaved_embeddings
  which are absent in older vllm qwen2_5_omni_thinker; these symbols
  are only exercised by Qwen models, not HyperCLOVAX

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
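
The version guard in the first bullet, sketched; the primary import path is quoted from this commit, while falling back to `None` (and guarding at call sites) is an assumption.

```python
try:
    from vllm.v1.attention.backends.registry import AttentionBackendEnum
except ImportError:
    # Older vllm builds don't ship this registry module; callers must
    # check for None before touching the enum.
    AttentionBackendEnum = None
```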

* fix: add edge 1→2 and correct model key in hcx_omni.yaml Stage 2

- Add runtime edge from:1 to:2 (required for Stage-2 connector init;
  without it AsyncOrchestrator cannot route to audio decoder at runtime)
- Change model_subdir to model for Stage-2 engine_args to match
  total-poc working reference config

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: audio S2S output - handle diffusion outputs in _create_audio_choice

HyperCLOVAXAudioPipeline (diffusion) stores audio in multimodal_output
directly (OmniRequestOutput.from_diffusion), not in outputs[0].multimodal_output
like LLM pipelines. Fix three locations:

1. _create_audio_choice (non-streaming): use omni_outputs.multimodal_output
   when final_res.outputs is empty (diffusion path).
2. Streaming audio path: same fix for _final_res.outputs[0].
3. Both loops (for output in final_res.outputs): fall back to single
   synthetic choice at index 0 when outputs list is empty.
4. Handle bytes audio output from HyperCLOVAXAudioPipeline post-process
   (returns WAV bytes, not tensors like Qwen3-Omni).

Also fixes audio input (A2T) regression: skip diffusion prompt extraction
when mm_data has audio content (added in previous session).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
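
A hedged sketch of the fallback in item 1: the `outputs` / `multimodal_output` attribute names are from this commit, the helper itself is illustrative.

```python
def extract_multimodal_output(final_res, omni_outputs):
    """Prefer the LLM-pipeline location, fall back to the diffusion one."""
    if final_res.outputs:
        # LLM pipelines attach audio to the first output.
        return final_res.outputs[0].multimodal_output
    # Diffusion pipelines (OmniRequestOutput.from_diffusion) store it on
    # the request-level output instead.
    return omni_outputs.multimodal_output
```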

* fix: parse WAV bytes with soundfile for uniform PCM chunk streaming

HyperCLOVAXAudioPipeline returns WAV bytes including 44-byte header.
The previous byte-offset splitting included the header in the first
chunk, corrupting it. Fix: parse with soundfile to get float32 PCM,
then convert to int16 chunks uniformly regardless of source type
(bytes or tensor).

Verified: 136 audio chunks x 200ms = 27.04s audio streamed correctly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
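
The decode step, sketched with `soundfile` (which this commit names), assuming mono output and the same float32-to-int16 convention as the streaming path.

```python
import io

import numpy as np
import soundfile as sf


def wav_bytes_to_int16_pcm(wav_bytes: bytes) -> np.ndarray:
    # Let soundfile parse the 44-byte RIFF header rather than slicing raw
    # bytes, which is what corrupted the first chunk before this fix.
    data, _sample_rate = sf.read(io.BytesIO(wav_bytes), dtype="float32")
    return (np.clip(data, -1.0, 1.0) * 32767.0).astype(np.int16)
```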

* feat: zero-shot TTS with speaker embedding from input audio

- serving_chat.py: extract last input_audio base64 from request messages
  and inject as ref_audio_b64 into engine_prompt dict
- thinker2audio_decoder: read ref_audio_b64 from prompt and pass as
  ref_audio_tokens to Stage 2 (HyperCLOVAXAudioPipeline)
- hcx_omni.yaml: switch Stage 2 to NCZSCosybigvganDecoder.mar (zero-shot)
  which uses ECAPA-TDNN speaker encoder instead of finetuned ID lookup

Pipeline: input audio -> ECAPA-TDNN -> speaker embedding -> BigVGAN synthesis
matching the voice characteristics of the original speaker.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
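
A sketch of the injection step, assuming OpenAI-style chat messages whose audio parts look like `{"type": "input_audio", "input_audio": {"data": <base64>}}`; only the `ref_audio_b64` key is taken from this commit.

```python
def inject_ref_audio(messages: list[dict], engine_prompt: dict) -> None:
    """Copy the last input_audio payload into the engine prompt, if any."""
    for message in reversed(messages):
        content = message.get("content")
        if not isinstance(content, list):
            continue
        for part in reversed(content):
            if isinstance(part, dict) and part.get("type") == "input_audio":
                engine_prompt["ref_audio_b64"] = part["input_audio"]["data"]
                return
```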

* feat: wire audio decoder Stage 2 to hcx_omni pipeline and fix S2S flow

- Add Stage 2 (HyperCLOVAXAudioPipeline / NCZSCosybigvganDecoder) to hcx_omni.yaml
  with GPU 5, gpu_memory_utilization 0.4, edge 0->2 from thinker
- Fix thinker2audio_decoder: correct audio token range (128606-135167),
  remap to [0, 6561) for BigVGAN input, handle empty token case gracefully
- Fix pipeline_hyperclovax_audio.py post_process_func signature and
  incorporate PR#869 BUG FIX patches for stable audio generation
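
The remap, sketched under one assumption: the upper bound of the quoted range (128606-135167) is exclusive, so exactly 6561 codes land in [0, 6561). The empty-result behavior mirrors the "handle empty token case gracefully" note above.

```python
AUDIO_TOKEN_START = 128_606
AUDIO_TOKEN_END = 135_167  # assumed exclusive: 135_167 - 128_606 == 6_561


def remap_audio_tokens(token_ids: list[int]) -> list[int]:
    """Keep HCX audio tokens and shift them into BigVGAN's [0, 6561) range."""
    # The result may be empty for text-only requests; callers must tolerate that.
    return [t - AUDIO_TOKEN_START for t in token_ids
            if AUDIO_TOKEN_START <= t < AUDIO_TOKEN_END]
```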

* fix: use finetuned audio decoder and fix transformers_modules deserialization

- hcx_omni.yaml: switch Stage 2 from NCZSCosybigvganDecoder (zero-shot,
  ECAPA-TDNN) to NCCosybigvganDecoder (finetuned, nn.Embedding speaker id).
  Zero-shot decoder required ref_audio (mel spectrogram) which is unavailable
  for text-only requests and incompatible with finetuned decoder path.

- pipeline_hyperclovax_audio.py: guard ref_audio processing with
  'not self.bigvgan.finetune' — finetuned decoder has no ECAPA-TDNN encoder,
  so passing ref_audio bytes would crash with 'expected 100 channels'.

- omni_stage.py: add HuggingFace modules cache (~/.cache/huggingface/modules)
  to sys.path before queue.get_nowait() in try_collect(). Stage-0 pickles
  outputs containing custom classes from transformers_modules (trust_remote_code),
  but the API server process doesn't have this path, causing deserialization
  failures that silently drop Stage-0 outputs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
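
The omni_stage.py path fix, sketched; the cache directory is quoted from this commit, while the helper name and call-site comment are hypothetical.

```python
import os
import sys


def ensure_hf_modules_on_path() -> None:
    """Make pickled transformers_modules classes importable in this process."""
    hf_modules = os.path.expanduser("~/.cache/huggingface/modules")
    if hf_modules not in sys.path:
        sys.path.insert(0, hf_modules)


# Hypothetical call site, before draining the inter-stage queue:
#   ensure_hf_modules_on_path()
#   output = queue.get_nowait()  # pickle can now resolve transformers_modules.*
```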

* fix: restore zero-shot speaker cloning with fallback for text-only requests

- hcx_omni.yaml: revert to NCZSCosybigvganDecoder.mar (zero-shot ECAPA-TDNN)
  for voice-preserving S2S synthesis. NCCosybigvganDecoder used a fixed
  integer speaker_id and lost the input speaker's voice.

- pipeline_hyperclovax_audio.py: add zero-mel fallback branch for
  finetune=False + ref_audio=None case. When a text-only request arrives
  (no input audio → no ref_audio), ECAPA-TDNN receives a zero mel tensor
  [1, num_mels, 64] instead of crashing with 'expected 100 channels'.
  S2S requests always have ref_audio so the zero-shot cloning path is
  unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
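
The fallback branch, sketched; the `[1, num_mels, 64]` shape is from this commit, the signature is hypothetical.

```python
import torch


def make_ref_mel(ref_audio_mel: torch.Tensor | None, num_mels: int) -> torch.Tensor:
    if ref_audio_mel is not None:
        # S2S path: real reference mel, zero-shot cloning unchanged.
        return ref_audio_mel
    # Text-only path: feed ECAPA-TDNN a neutral zero mel instead of crashing.
    return torch.zeros(1, num_mels, 64)
```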

* feat: add stage config yaml for HCX audio decoder

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* feat: add HyperCLOVAX-SEED-Omni 8B model as vllm-omni executor

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* feat: add HCX audio decoder pipeline

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* fix: modify exception for HCX audio decoder (GAN)

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* fix: set default temperature to 0 and put the pipeline model in evaluation mode

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

---------

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Signed-off-by: dengyunyang <584797741@qq.com>
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Kyle Huang <yellowsea@gmail.com>
Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: natureofnature <wzliu@connect.hku.hk>
Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>
Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli0116@gmail.com>
Signed-off-by: dongbo910220 <1275604947@qq.com>
Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: jzz <e1583181@u.nus.edu>
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Signed-off-by: Pierre Le Guen <26087574+PierreLeGuen@users.noreply.github.com>
Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: ram16g <anlianfengjie@163.com>
Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Signed-off-by: pablo <juanz9312@gmail.com>
Signed-off-by: Roger Wang <hey@rogerw.io>
Signed-off-by: anna <lee.anna@navercorp.com>
Signed-off-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>
Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>
Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: erfgss <97771661+erfgss@users.noreply.github.com>
Signed-off-by: xiedeyantu <czjourney@163.com>
Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Signed-off-by: David Chen <530634352@qq.com>
Signed-off-by: weichen <calvin_zhu0210@outlook.com>
Signed-off-by: Yan Ma <yan.ma@intel.com>
Signed-off-by: ApsarasX <apsarax@outlook.com>
Signed-off-by: Chenguang ZHENG <645327136@qq.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: Semmer2 <semmer@live.cn>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Signed-off-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Signed-off-by: Gao Han <hgaoaf@connect.ust.hk>
Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Signed-off-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>
Signed-off-by: Ziming Huang <1520787127@qq.com>
Signed-off-by: SamitHuang <285365963@qq.com>
Signed-off-by: GG-li <3226868735@qq.com>
Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Signed-off-by: John Liu BUAA <liukecheng97@gmail.com>
Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Signed-off-by: dsinghvi <divyanshsinghvi@gmail.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Signed-off-by: Daniel Huang <daniel1.huang@intel.com>
Signed-off-by: Alex Brooks <albrooks@redhat.com>
Signed-off-by: Sy03 <1370724210@qq.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: UsamaKenway <usamakenway@gmail.com>
Signed-off-by: Hunter Liu <hunter@liu.sh>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: wuhang <wuhang6@huawei.com>
Signed-off-by: wuhang <whlbx@hotmail.com>
Signed-off-by: pablo <pablo@agigo.ai>
Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: Dong Wang <dongw2019@gmail.com>
Signed-off-by: sniper35 <dongw2019@gmail.com>
Signed-off-by: Yanick Schraner <yanick.schraner@bs.ch>
Signed-off-by: Sangchun Ha <seomk9896@gmail.com>
Signed-off-by: jader <yjader@foxmail.com>
Signed-off-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Signed-off-by: AndyZhou952 <jzhoubc@connect.ust.hk>
Signed-off-by: Yupu <feng.yu.pu0330@gmail.com>
Signed-off-by: Kevin H. Luu <khluu000@gmail.com>
Signed-off-by: zhumingjue <zhumingjue@huawei.com>
Signed-off-by: zhumingjue138 <zhumingjue@huawei.com>
Signed-off-by: JaredforReal <w13431838023@gmail.com>
Signed-off-by: Jared Wen <w13431838023@gmail.com>
Signed-off-by: xulusjb <fdukeshik@gmail.com>
Signed-off-by: 齐保元 <qibaoyuan@xiaomi.com>
Signed-off-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Signed-off-by: Baoyuan Qi <qibaoyuan@126.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Signed-off-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Signed-off-by: baoyuan qi <qibaoyuan@126.com>
Signed-off-by: Prajwal A <prajwalanagani@gmail.com>
Signed-off-by: 丁宁 <nndding@gmail.com>
Signed-off-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning@xiaomi.com>
Signed-off-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Signed-off-by: shijin zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>
Signed-off-by: Hyunjoon Jeong <with1015@unist.ac.kr>
Co-authored-by: Zeyu Huang | 黃澤宇 <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: JohnJan <wuzhongjian_yewu@cmss.chinamobile.com>
Co-authored-by: dengyunyang <584797741@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Canlin Guo <canlinguosdu@gmail.com>
Co-authored-by: Samit <285365963@qq.com>
Co-authored-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: kYLe <yellowsea@gmail.com>
Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Co-authored-by: TJian <tunjian.tan@embeddedllm.com>
Co-authored-by: NATURE <wzliu@connect.hku.hk>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: Zhou Taichang <tzhouam@connect.ust.hk>
Co-authored-by: root <root@hk01dgx028.cm.cluster>
Co-authored-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Rein Yang <ruiruyang2@gmail.com>
Co-authored-by: Ziming Huang <hzm414167@alibaba-inc.com>
Co-authored-by: dsinghvi <divyanshsinghvi@gmail.com>
Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Co-authored-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Co-authored-by: Pierre LE GUEN <26087574+PierreLeGuen@users.noreply.github.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: ram16g <anlianfengjie@163.com>
Co-authored-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Markus / Mark <46672778+marksverdhei@users.noreply.github.com>
Co-authored-by: Juan Pablo Zuluaga <46724788+JuanPZuluaga@users.noreply.github.com>
Co-authored-by: muziyuhui666 <111362884+muziyuhui666@users.noreply.github.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Co-authored-by: ceanna93 <fairyanna@naver.com>
Co-authored-by: anna <lee.anna@navercorp.com>
Co-authored-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>
Co-authored-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: erfgss <97771661+erfgss@users.noreply.github.com>
Co-authored-by: Jensen <czjourney@163.com>
Co-authored-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: weichen <calvin_zhu0210@outlook.com>
Co-authored-by: PopSoda2002 <zhouhp.me@gmail.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: Chenguang Zheng <645327136@qq.com>
Co-authored-by: Jiaping Wu <53215702+ElleElleWu@users.noreply.github.com>
Co-authored-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>
Co-authored-by: rein yang <73573651+R2-Y@users.noreply.github.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Flora Feng <4florafeng@gmail.com>
Co-authored-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Co-authored-by: ChenWenjing <54166744+Shirley125@users.noreply.github.com>
Co-authored-by: Bhanu068 <voutharoja.bhanu06@gmail.com>
Co-authored-by: John Liu BUAA <liukecheng97@gmail.com>
Co-authored-by: yenuo26 <410167048@qq.com>
Co-authored-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Co-authored-by: liuzhenwei <zhenwei.liu@intel.com>
Co-authored-by: Isotr0py <Isotr0py@outlook.com>
Co-authored-by: ZJY0516 <zhu.jiangyun@foxmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Co-authored-by: Daniel Huang <daniel1.huang@intel.com>
Co-authored-by: Alex Brooks <albrooks@redhat.com>
Co-authored-by: Sy03 <1370724210@qq.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: UsamaKenway <56207634+UsamaKenway@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: wuhang <wuhang6@huawei.com>
Co-authored-by: pablo <pablo@agigo.ai>
Co-authored-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: Dong W <89223086+sniper35@users.noreply.github.com>
Co-authored-by: Yanick Schraner <yanick.schraner@gmail.com>
Co-authored-by: Sangchun Ha <seomk9896@naver.com>
Co-authored-by: 亦瑾 <76905040+yJader@users.noreply.github.com>
Co-authored-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Co-authored-by: Yupu <feng.yu.pu0330@gmail.com>
Co-authored-by: Kevin H. Luu <khluu000@gmail.com>
Co-authored-by: zhumingjue138 <zhumingjue@huawei.com>
Co-authored-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Jared Wen <w13431838023@gmail.com>
Co-authored-by: Xu Lu <572605156@qq.com>
Co-authored-by: xulusjb <fdukeshik@gmail.com>
Co-authored-by: Baoyuan Qi <qibaoyuan@xiaomi.com>
Co-authored-by: Zhang Shijin <zhangshijin@xiaomi.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: shijin zhang <zsj1364226740@gmail.com>
Co-authored-by: Prajwal A <34590600+LawJarp-A@users.noreply.github.com>
Co-authored-by: dingning <dingning7@xiaomi.com>
Co-authored-by: ning ding <nndding@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
Co-authored-by: Ting FU <futing10@huawei.com>
Co-authored-by: developer-account <irteam@vllm-omni-dev-0.vllm-omni-dev.p-nb13557.svc.cluster.local>
Co-authored-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>