
[Perf]: CFG parallel abstraction #851

Merged
hsliuustc0106 merged 68 commits into vllm-project:main from wtomin:cfg-base-pipeline
Jan 30, 2026

Conversation

@wtomin
Collaborator

@wtomin wtomin commented Jan 19, 2026

Purpose

This PR implements the CFG parallelization abstraction proposed in RFC #850 for diffusion pipelines in vLLM-Omni via CFGParallelMixin.

CFGParallelMixin is a shared abstraction that enables diffusion pipelines to perform classifier-free guidance (CFG) either sequentially (single process) or in parallel across a dedicated CFG process group (rank-split conditional/unconditional forward passes).

See QwenImageCFGParallelMixin.diffuse as an example.
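A minimal, self-contained sketch of the control flow the mixin abstracts is shown below. The method names (predict_noise_maybe_with_cfg, scheduler_step_maybe_with_cfg) follow this PR; the toy denoiser, scheduler update, and tensor shapes are placeholders for illustration only, not the real vLLM-Omni implementation.

import torch

class ToyCFGPipeline:
    """Toy stand-in for a pipeline that mixes in CFGParallelMixin."""

    def __init__(self, guidance_scale: float = 4.0):
        self.guidance_scale = guidance_scale

    def predict_noise(self, latents, t, text_cond):
        # Placeholder for a transformer forward pass.
        return latents * 0.1 + text_cond

    def predict_noise_maybe_with_cfg(self, latents, t, cond, uncond, do_true_cfg):
        if not do_true_cfg:
            return self.predict_noise(latents, t, cond)
        # Sequential mode: both branches run on one rank.
        # In CFG-parallel mode, each rank of the CFG group runs one branch and the
        # results are exchanged before being combined below.
        noise_cond = self.predict_noise(latents, t, cond)
        noise_uncond = self.predict_noise(latents, t, uncond)
        return noise_uncond + self.guidance_scale * (noise_cond - noise_uncond)

    def scheduler_step_maybe_with_cfg(self, noise_pred, t, latents, do_true_cfg):
        # Placeholder Euler-style update; the real mixin also keeps CFG ranks in sync here.
        return latents - 0.05 * noise_pred

latents = torch.randn(1, 4, 8, 8)
cond, uncond = torch.randn_like(latents), torch.zeros_like(latents)
pipe = ToyCFGPipeline()
for t in range(4):
    noise_pred = pipe.predict_noise_maybe_with_cfg(latents, t, cond, uncond, do_true_cfg=True)
    latents = pipe.scheduler_step_maybe_with_cfg(noise_pred, t, latents, do_true_cfg=True)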

Test Plan

  • Unit test
pytest -s -v tests/diffusion/distributed/test_cfg_parallel.py
  1. test_predict_noise_maybe_with_cfg (main test)
    Purpose: verifies that CFG parallel produces numerically identical results to sequential CFG execution (a standalone sketch of this equivalence follows below).

  2. test_predict_noise_without_cfg
    Purpose: covers the case where CFG is disabled (do_true_cfg=False).
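
A rough single-process illustration of the equivalence that the main test checks (the real test runs the two branches across a CFG process group; the toy linear denoiser below is only for demonstration):

import torch

torch.manual_seed(0)
weight = torch.randn(16, 16)

def denoise(latents, text_embed):
    # Toy stand-in for the transformer's noise prediction.
    return latents @ weight + text_embed

latents = torch.randn(1, 16)
cond_embed, uncond_embed = torch.randn(1, 16), torch.zeros(1, 16)
scale = 4.0

# Sequential CFG: one process computes both branches and combines them.
noise_cond = denoise(latents, cond_embed)
noise_uncond = denoise(latents, uncond_embed)
sequential = noise_uncond + scale * (noise_cond - noise_uncond)

# CFG parallel: each simulated rank computes one branch; after exchanging the
# results, the combination is identical to the sequential path.
rank0_out = denoise(latents, cond_embed)    # conditional branch on rank 0
rank1_out = denoise(latents, uncond_embed)  # unconditional branch on rank 1
parallel = rank1_out + scale * (rank0_out - rank1_out)

assert torch.allclose(sequential, parallel)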

  • image generation

Five models are tested: FLUX.2-KLEIN-4B, LONGCAT-IMAGE, OVIS-IMAGE, QWEN-IMAGE, STABLE-DIFFUSION-3

The bash script to run all t2i tasks
#!/bin/bash

# Script to run text-to-image inference for all supported models
# Comparing with and without CFG parallel
# Logs are saved to individual txt files for each experiment
# If one task fails, other tasks will continue to run

# Propagate python's exit status through the `| tee` pipeline so failed runs are detected
set -o pipefail

PROMPT="a lovely bunny holding a sign that says 'vllm-omni'"
NEGATIVE_PROMPT="ugly, unclear, blurry, gray"

# Arrays to track success and failure
declare -a SUCCESS_TASKS
declare -a FAILED_TASKS

# Define models and their parameters
# Format: "model_name|model_path|scale_arg|scale_value"
declare -a MODELS=(
  "Qwen-Image|Qwen/Qwen-Image|cfg_scale|4.0"
  "FLUX.2-klein-4B|black-forest-labs/FLUX.2-klein-4B|guidance_scale|4.0"
  "LongCat-Image|meituan-longcat/LongCat-Image|guidance_scale|4.0"
  "Ovis-Image|AIDC-AI/Ovis-Image-7B|guidance_scale|4.0"
  "Stable-Diffusion-3|stabilityai/stable-diffusion-3.5-medium|guidance_scale|4.0"
)

# Eager mode configurations
declare -a EAGER_CONFIGS=(
  "no_eager|"
  "with_eager|--enforce_eager"
)

# CFG parallel configurations
declare -a CFG_CONFIGS=(
  "no_cfg_parallel|"
  "with_cfg_parallel|--cfg_parallel_size 2"
)

echo "=========================================="
echo "Starting text-to-image inference tests"
echo "Testing combinations of eager mode and CFG parallel"
echo "4 test cases per model:"
echo "  1. no_eager + no_cfg_parallel"
echo "  2. no_eager + with_cfg_parallel"
echo "  3. with_eager + no_cfg_parallel"
echo "  4. with_eager + with_cfg_parallel"
echo "Each model's outputs saved in its own directory"
echo "Note: If one task fails, others will continue"
echo "=========================================="
echo ""

TASK_NUM=0
TOTAL_TASKS=$((${#MODELS[@]} * ${#EAGER_CONFIGS[@]} * ${#CFG_CONFIGS[@]}))

# Run experiments for each model and configuration
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name model_path scale_arg scale_value <<< "$model_info"
  
  # Create directory for this model
  model_dir="${model_name// /_}"
  mkdir -p "$model_dir"
  
  for eager_info in "${EAGER_CONFIGS[@]}"; do
    IFS='|' read -r eager_label eager_args <<< "$eager_info"
    
    for cfg_info in "${CFG_CONFIGS[@]}"; do
      IFS='|' read -r cfg_label cfg_args <<< "$cfg_info"
      TASK_NUM=$((TASK_NUM + 1))
      
      # Generate filenames inside model directory
      base_name="${model_name,,}"
      base_name="${base_name// /_}"
      output_file="$model_dir/${base_name}_output_${eager_label}_${cfg_label}.png"
      log_file="$model_dir/${base_name}_${eager_label}_${cfg_label}.log"
      task_label="$model_name ($eager_label + $cfg_label)"
      
      echo "=========================================="
      echo "$TASK_NUM/$TOTAL_TASKS: Running $task_label..."
      echo "=========================================="
      
      # Build and execute command
      if python examples/offline_inference/text_to_image/text_to_image.py \
        --model "$model_path" \
        --${scale_arg} "$scale_value" \
        --prompt "$PROMPT" \
        --negative_prompt "$NEGATIVE_PROMPT" \
        --output "$output_file" \
        $eager_args \
        $cfg_args \
        2>&1 | tee "$log_file"; then
        echo "✓ $task_label completed."
        SUCCESS_TASKS+=("$task_label")
      else
        echo "✗ $task_label FAILED."
        FAILED_TASKS+=("$task_label")
      fi
      echo ""
    done
  done
done

echo "=========================================="
echo "All tasks completed!"
echo "=========================================="
echo "Summary: ${#SUCCESS_TASKS[@]}/$TOTAL_TASKS successful, ${#FAILED_TASKS[@]}/$TOTAL_TASKS failed"
echo ""

if [ ${#SUCCESS_TASKS[@]} -gt 0 ]; then
  echo "✓ Successful tasks:"
  for task in "${SUCCESS_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
fi

if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  echo "✗ Failed tasks:"
  for task in "${FAILED_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
  echo "Check model directories for error logs."
  echo ""
fi

echo "Output directories:"
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name _ _ _ <<< "$model_info"
  model_dir="${model_name// /_}"
  echo "  - $model_dir/ (images and logs for $model_name)"
done

# Exit with error code if any tasks failed
if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  exit 1
fi

  • image edit

Four models are tested: Qwen-Image-Edit, Qwen-Image-Edit-2509, Qwen-Image-Layered, LongCat-Image-Edit
(Because of #1002, Qwen-Image-Layered failed with a shape error.)

The bash script to run all image edit tasks
#!/bin/bash

# Script to run image-to-image (image edit) inference for all supported models
# Comparing with and without CFG parallel
# Logs are saved to individual txt files for each experiment
# If one task fails, other tasks will continue to run

# Propagate python's exit status through the `| tee` pipeline so failed runs are detected
set -o pipefail

PROMPT="turn this bunny to a cat"
NEGATIVE_PROMPT="ugly, unclear, blurry, gray"
NUM_INFERENCE_STEPS=50

# Arrays to track success and failure
declare -a SUCCESS_TASKS
declare -a FAILED_TASKS

# Define models and their parameters
# Format: "model_name|model_path|input_image|output_prefix|scale_arg|scale_value|extra_args"
declare -a MODELS=(
  "Qwen-Image-Edit|Qwen/Qwen-Image-Edit|./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png|output_image_edit|cfg_scale|4.0|"
  "Qwen-Image-Edit-2509|Qwen/Qwen-Image-Edit-2509|./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png|output_image_edit|cfg_scale|4.0|"
  "Qwen-Image-Layered|Qwen/Qwen-Image-Layered|./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png|layered|cfg_scale|4.0|--color-format RGBA --output layered --layers 2"
  "LongCat-Image-Edit|meituan-longcat/LongCat-Image-Edit|./LongCat-Image/longcat-image_output_no_eager_no_cfg_parallel.png|output_image_edit|guidance_scale|4.0|"
)

# Eager mode configurations
declare -a EAGER_CONFIGS=(
  "no_eager|"
  "with_eager|--enforce_eager"
)

# CFG parallel configurations
declare -a CFG_CONFIGS=(
  "no_cfg_parallel|"
  "with_cfg_parallel|--cfg_parallel_size 2"
)

echo "=========================================="
echo "Starting image-to-image (image edit) inference tests"
echo "Testing combinations of eager mode and CFG parallel"
echo "4 test cases per model:"
echo "  1. eager + cfg_parallel"
echo "  2. no eager + no_cfg_parallel"
echo "  3. no eager + cfg_parallel"
echo "  4. eager + no_cfg_parallel"
echo "Each model's outputs saved in its own directory"
echo "Note: If one task fails, others will continue"
echo "=========================================="
echo ""

# Check if required input images exist
echo "Checking for input images..."
if [ ! -f "./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png" ]; then
  echo "⚠ Warning: ./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png not found. Qwen models may fail."
fi
if [ ! -f "./LongCat-Image/longcat-image_output_no_eager_no_cfg_parallel.png" ]; then
  echo "⚠ Warning: ./LongCat-Image/longcat-image_output_no_eager_no_cfg_parallel.png not found. LongCat model may fail."
fi
echo ""

TASK_NUM=0
TOTAL_TASKS=$((${#MODELS[@]} * ${#EAGER_CONFIGS[@]} * ${#CFG_CONFIGS[@]}))

# Run experiments for each model and configuration
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name model_path input_image output_prefix scale_arg scale_value extra_args <<< "$model_info"
  
  # Create directory for this model
  model_dir="${model_name// /_}"
  mkdir -p "$model_dir"
  
  for eager_info in "${EAGER_CONFIGS[@]}"; do
    IFS='|' read -r eager_label eager_args <<< "$eager_info"
    
    for cfg_info in "${CFG_CONFIGS[@]}"; do
      IFS='|' read -r cfg_label cfg_args <<< "$cfg_info"
      TASK_NUM=$((TASK_NUM + 1))
      
      # Generate filenames inside model directory
      base_name="${model_name,,}"
      base_name="${base_name// /_}"
      output_file="$model_dir/${output_prefix}_${eager_label}_${cfg_label}.png"
      log_file="$model_dir/${base_name}_${eager_label}_${cfg_label}.log"
      task_label="$model_name ($eager_label + $cfg_label)"
      
      echo "=========================================="
      echo "$TASK_NUM/$TOTAL_TASKS: Running $task_label..."
      echo "=========================================="
      
      # Build and execute command
      cmd="python examples/offline_inference/image_to_image/image_edit.py \
        --model \"$model_path\" \
        --image \"$input_image\" \
        --prompt \"$PROMPT\" \
        --negative_prompt \"$NEGATIVE_PROMPT\" \
        --output \"$output_file\" \
        --num_inference_steps $NUM_INFERENCE_STEPS \
        --${scale_arg} $scale_value \
        $extra_args \
        $eager_args \
        $cfg_args"
      
      if eval "$cmd" 2>&1 | tee "$log_file"; then
        echo "✓ $task_label completed."
        SUCCESS_TASKS+=("$task_label")
      else
        echo "✗ $task_label FAILED."
        FAILED_TASKS+=("$task_label")
      fi
      echo ""
    done
  done
done

echo "=========================================="
echo "All tasks completed!"
echo "=========================================="
echo "Summary: ${#SUCCESS_TASKS[@]}/$TOTAL_TASKS successful, ${#FAILED_TASKS[@]}/$TOTAL_TASKS failed"
echo ""

if [ ${#SUCCESS_TASKS[@]} -gt 0 ]; then
  echo "✓ Successful tasks:"
  for task in "${SUCCESS_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
fi

if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  echo "✗ Failed tasks:"
  for task in "${FAILED_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
  echo "Check model directories for error logs."
  echo ""
fi

echo "Output directories:"
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name _ _ _ _ _ _ <<< "$model_info"
  model_dir="${model_name// /_}"
  echo "  - $model_dir/ (images and logs for $model_name)"
done

# Exit with error code if any tasks failed
if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  exit 1
fi

  • Video Generation
The bash script to run all video generation tasks
#!/bin/bash

# Script to run text-to-video, image-to-video, and text+image-to-video inference for all supported models
# Testing combinations of eager mode and CFG parallel
# Logs are saved to individual txt files for each experiment
# If one task fails, other tasks will continue to run

# Propagate python's exit status through the `| tee` pipeline so failed runs are detected
set -o pipefail

PROMPT="a lovely bunny holding a sign that says 'vllm-omni', dancing from left to right"
NEGATIVE_PROMPT="色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

# Video generation parameters
HEIGHT=480
WIDTH=832
NUM_FRAMES=33
BOUNDARY_RATIO=0.875
NUM_INFERENCE_STEPS=40
FPS=16

# Input image for I2V and TI2V models
INPUT_IMAGE="./Qwen-Image/qwen-image_output_no_eager_no_cfg_parallel.png"

# Arrays to track success and failure
declare -a SUCCESS_TASKS
declare -a FAILED_TASKS

# Define models and their parameters
# Format: "model_name|model_path|flow_shift|guidance_scale|guidance_scale_high|needs_image"
declare -a MODELS=(
  "Wan2.2-T2V|Wan-AI/Wan2.2-T2V-A14B-Diffusers|12.0|4.0|4.0|no"
  "Wan2.2-I2V|Wan-AI/Wan2.2-I2V-A14B-Diffusers|12.0|5.0|6.0|yes"
  "Wan2.2-TI2V|Wan-AI/Wan2.2-TI2V-5B-Diffusers|12.0|4.0||yes"
)

# Eager mode configurations
declare -a EAGER_CONFIGS=(
  "no_eager|"
  "with_eager|--enforce_eager"
)

# CFG parallel configurations
declare -a CFG_CONFIGS=(
  "no_cfg_parallel|"
  "with_cfg_parallel|--cfg_parallel_size 2"
)

echo "=========================================="
echo "Starting text/image-to-video inference tests"
echo "Testing 3 models:"
echo "  - Wan2.2-T2V (text-to-video)"
echo "  - Wan2.2-I2V (image-to-video)"
echo "  - Wan2.2-TI2V (text+image-to-video)"
echo "Testing combinations of eager mode and CFG parallel"
echo "4 test cases per model:"
echo "  1. no_eager + no_cfg_parallel"
echo "  2. no_eager + with_cfg_parallel"
echo "  3. with_eager + no_cfg_parallel"
echo "  4. with_eager + with_cfg_parallel"
echo "Each model's outputs saved in its own directory"
echo "Note: If one task fails, others will continue"
echo "=========================================="
echo ""

# Check if input image exists for I2V and TI2V models
if [ ! -f "$INPUT_IMAGE" ]; then
  echo "⚠ Warning: Input image not found: $INPUT_IMAGE"
  echo "   I2V and TI2V models may fail. Please run text-to-image tests first."
  echo ""
fi

TASK_NUM=0
TOTAL_TASKS=$((${#MODELS[@]} * ${#EAGER_CONFIGS[@]} * ${#CFG_CONFIGS[@]}))

# Run experiments for each model and configuration
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name model_path flow_shift guidance_scale guidance_scale_high needs_image <<< "$model_info"
  
  # Create directory for this model
  model_dir="${model_name// /_}"
  mkdir -p "$model_dir"
  
  for eager_info in "${EAGER_CONFIGS[@]}"; do
    IFS='|' read -r eager_label eager_args <<< "$eager_info"
    
    for cfg_info in "${CFG_CONFIGS[@]}"; do
      IFS='|' read -r cfg_label cfg_args <<< "$cfg_info"
      TASK_NUM=$((TASK_NUM + 1))
      
      # Generate filenames inside model directory
      base_name="${model_name,,}"
      base_name="${base_name// /_}"
      base_name="${base_name//./_}"
      output_file="$model_dir/${base_name}_output_${eager_label}_${cfg_label}.mp4"
      log_file="$model_dir/${base_name}_${eager_label}_${cfg_label}.log"
      task_label="$model_name ($eager_label + $cfg_label)"
      
      echo "=========================================="
      echo "$TASK_NUM/$TOTAL_TASKS: Running $task_label..."
      echo "=========================================="
      
      # Build command based on model type
      # Use image_to_video.py for I2V/TI2V models, text_to_video.py for T2V
      if [ "$needs_image" = "yes" ]; then
        # Image-to-Video or Text+Image-to-Video
        cmd="python examples/offline_inference/image_to_video/image_to_video.py \
          --model \"$model_path\" \
          --image \"$INPUT_IMAGE\" \
          --prompt \"$PROMPT\" \
          --negative_prompt \"$NEGATIVE_PROMPT\" \
          --height $HEIGHT \
          --width $WIDTH \
          --num_frames $NUM_FRAMES \
          --guidance_scale $guidance_scale"
        
        # Add guidance_scale_high if specified
        if [ -n "$guidance_scale_high" ]; then
          cmd="$cmd --guidance_scale_high $guidance_scale_high"
        fi
        
        cmd="$cmd \
          --boundary_ratio $BOUNDARY_RATIO \
          --num_inference_steps $NUM_INFERENCE_STEPS \
          --flow_shift $flow_shift \
          --fps $FPS \
          --output \"$output_file\" \
          $eager_args \
          $cfg_args"
      else
        # Text-to-Video
        cmd="python examples/offline_inference/text_to_video/text_to_video.py \
          --model \"$model_path\" \
          --prompt \"$PROMPT\" \
          --negative_prompt \"$NEGATIVE_PROMPT\" \
          --height $HEIGHT \
          --width $WIDTH \
          --num_frames $NUM_FRAMES \
          --guidance_scale $guidance_scale"
        
        # Add guidance_scale_high if specified
        if [ -n "$guidance_scale_high" ]; then
          cmd="$cmd --guidance_scale_high $guidance_scale_high"
        fi
        
        cmd="$cmd \
          --boundary_ratio $BOUNDARY_RATIO \
          --num_inference_steps $NUM_INFERENCE_STEPS \
          --flow_shift $flow_shift \
          --fps $FPS \
          --output \"$output_file\" \
          $eager_args \
          $cfg_args"
      fi
      
      # Execute command
      if eval "$cmd" 2>&1 | tee "$log_file"; then
        echo "✓ $task_label completed."
        SUCCESS_TASKS+=("$task_label")
      else
        echo "✗ $task_label FAILED."
        FAILED_TASKS+=("$task_label")
      fi
      echo ""
    done
  done
done

echo "=========================================="
echo "All tasks completed!"
echo "=========================================="
echo "Summary: ${#SUCCESS_TASKS[@]}/$TOTAL_TASKS successful, ${#FAILED_TASKS[@]}/$TOTAL_TASKS failed"
echo ""

if [ ${#SUCCESS_TASKS[@]} -gt 0 ]; then
  echo "✓ Successful tasks:"
  for task in "${SUCCESS_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
fi

if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  echo "✗ Failed tasks:"
  for task in "${FAILED_TASKS[@]}"; do
    echo "  - $task"
  done
  echo ""
  echo "Check model directories for error logs."
  echo ""
fi

echo "Output directories:"
for model_info in "${MODELS[@]}"; do
  IFS='|' read -r model_name _ _ <<< "$model_info"
  model_dir="${model_name// /_}"
  echo "  - $model_dir/ (videos and logs for $model_name)"
done

# Exit with error code if any tasks failed
if [ ${#FAILED_TASKS[@]} -gt 0 ]; then
  exit 1
fi

Test Result

  • Unit test
3 passed, 3 warnings in 458.11s (0:07:38)

All tests passed.

  • Setup

    • vllm: 0.14.0
    • python: 3.12
    • pytorch: 2.9.1
    • batch size: 1
    • platform: H800 server
  • Text-To-Image

| model | cfg_parallel_size | time (torch.compile) | time (eager) | generated image |
| --- | --- | --- | --- | --- |
| FLUX.2-KLEIN-4B | 1 | 6.29s | 8.17s | flux.2-klein-4b_output_no_eager_no_cfg_parallel |
| FLUX.2-KLEIN-4B | 2 | 5.15s | 6.20s | flux.2-klein-4b_output_no_eager_with_cfg_parallel |
| LongCat-Image | 1 | 20.36s | 20.38s | longcat-image_output_no_eager_no_cfg_parallel |
| LongCat-Image | 2 | 12.78s | 12.71s | longcat-image_output_no_eager_with_cfg_parallel |
| Ovis-Image | 1 | 9.18s | 11.85s | ovis-image_output_no_eager_no_cfg_parallel |
| Ovis-Image | 2 | 6.65s | 7.89s | ovis-image_output_no_eager_with_cfg_parallel |
| Qwen-Image | 1 | 13.89s | 17.80s | qwen-image_output_no_eager_no_cfg_parallel |
| Qwen-Image | 2 | 9.60s | 11.28s | qwen-image_output_no_eager_with_cfg_parallel |
| Stable-Diffusion-3 | 1 | 2.98s | 4.92s | stable-diffusion-3_output_no_eager_no_cfg_parallel |
| Stable-Diffusion-3 | 2 | 3.43s | 4.83s | stable-diffusion-3_output_no_eager_with_cfg_parallel |

CFG parallel yields little speedup for SD3; with torch.compile it is actually slightly slower (2.98s vs. 3.43s).

  • Image-Edit
| model | cfg_parallel_size | time (torch.compile) | time (eager) | generated image |
| --- | --- | --- | --- | --- |
| Qwen-Image-Edit | 1 | 35.55s | 43.22s | output_image_edit_no_eager_no_cfg_parallel |
| Qwen-Image-Edit | 2 | 20.29s | 24.21s | output_image_edit_no_eager_with_cfg_parallel |
| Qwen-Image-Edit-2509 | 1 | 29.96s | 37.17s | output_image_edit_no_eager_no_cfg_parallel |
| Qwen-Image-Edit-2509 | 2 | 17.21s | 21.76s | output_image_edit_no_eager_with_cfg_parallel |
| LongCat-Image-Edit | 1 | 41.11s | 41.10s | output_image_edit_no_eager_no_cfg_parallel |
| LongCat-Image-Edit | 2 | 24.27s | 23.07s | output_image_edit_no_eager_with_cfg_parallel |
  • Video Generation
HEIGHT=480
WIDTH=832
NUM_FRAMES=33
NUM_INFERENCE_STEPS=40
FPS=16
| model | cfg_parallel_size | time (torch.compile) | video |
| --- | --- | --- | --- |
| Wan-AI/Wan2.2-T2V-A14B-Diffusers | 1 | 80.9s | https://github.com/user-attachments/assets/561d454b-8a37-4599-8c39-914bcde28085 |
| Wan-AI/Wan2.2-T2V-A14B-Diffusers | 2 | 42.2s | https://github.com/user-attachments/assets/5a34c7da-d957-4060-b695-3bdbc827f977 |
| Wan-AI/Wan2.2-I2V-A14B-Diffusers | 1 | 81.7s | https://github.com/user-attachments/assets/2e5b26ff-7533-4f98-ad14-f7bb66eb340a |
| Wan-AI/Wan2.2-I2V-A14B-Diffusers | 2 | 43.2s | https://github.com/user-attachments/assets/4af12971-1609-4565-827d-4374e1880e16 |
| Wan-AI/Wan2.2-TI2V-5B-Diffusers | 1 | 13.4s | https://github.com/user-attachments/assets/c06885fb-3d79-4ab4-8a5f-b1b2bd9919d8 |
| Wan-AI/Wan2.2-TI2V-5B-Diffusers | 2 | 10.5s | https://github.com/user-attachments/assets/6b49a06a-3e5b-407f-b330-97d3f02e0c06 |

CC List.

@ZJY0516 @SamitHuang @david6666666 @hsliuustc0106



@wtomin wtomin changed the title [Refactor]: CFG parallel abstraction [WIP][Refactor]: CFG parallel abstraction Jan 19, 2026
@wtomin wtomin force-pushed the cfg-base-pipeline branch 2 times, most recently from 75e8ef4 to 51891b6 Compare January 26, 2026 11:13
@david6666666 david6666666 added this to the v0.14.0 milestone Jan 27, 2026
@wtomin wtomin force-pushed the cfg-base-pipeline branch from 1f02307 to 739b668 Compare January 27, 2026 09:14
@wtomin wtomin marked this pull request as ready for review January 27, 2026 09:17
@wtomin wtomin requested a review from hsliuustc0106 as a code owner January 27, 2026 09:17
@wtomin wtomin changed the title [WIP][Refactor]: CFG parallel abstraction [Refactor]: CFG parallel abstraction Jan 27, 2026
@wtomin wtomin changed the title [Refactor]: CFG parallel abstraction [Perf]: CFG parallel abstraction Jan 27, 2026

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 739b668791


Contributor

Copilot AI left a comment


Pull request overview

This PR introduces a shared classifier-free guidance (CFG) parallelization abstraction via CFGParallelMixin (and QwenImageCFGParallelMixin) and refactors multiple diffusion pipelines and examples to use it, enabling rank-split conditional/unconditional denoising across a dedicated CFG process group. It also wires CFG-parallel configuration into the offline video examples and updates the user documentation to describe and advertise CFG-Parallel support for the relevant models.

Changes:

  • Add CFGParallelMixin and QwenImageCFGParallelMixin implementing reusable predict_noise_maybe_with_cfg and scheduler_step_maybe_with_cfg helpers for both sequential and CFG-parallel execution.
  • Refactor image and video diffusion pipelines (Qwen-Image*, LongCat-Image*, Ovis-Image, Flux2-Klein, Wan2.2 T2V/I2V/TI2V, Stable-Diffusion-3) to use the new mixins instead of ad-hoc CFG logic, preserving editing-specific slicing and normalization behaviors.
  • Extend offline text-to-video and image-to-video examples and the diffusion acceleration docs to expose cfg_parallel_size, describe CFG-Parallel usage, and mark supported models appropriately.

Reviewed changes

Copilot reviewed 17 out of 17 changed files in this pull request and generated 3 comments.

Summary per file:

| File | Description |
| --- | --- |
| vllm_omni/diffusion/distributed/cfg_parallel.py | Introduces CFGParallelMixin and QwenImageCFGParallelMixin, encapsulating CFG sequential/parallel noise prediction, combination, optional normalization, and synchronized scheduler stepping. |
| vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image.py | Switches QwenImagePipeline to inherit QwenImageCFGParallelMixin and delegate its diffusion loop to the shared CFG-aware diffuse implementation. |
| vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image_edit.py | Refactors the Qwen image edit pipeline to use QwenImageCFGParallelMixin.diffuse, passing image latents and enabling CFG normalization through the mixin. |
| vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image_edit_plus.py | Same as above for the "Edit Plus" variant, delegating CFG-parallel diffusion (with normalization) to the mixin and passing attention kwargs through. |
| vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image_layered.py | Adopts QwenImageCFGParallelMixin, removing custom CFG-parallel logic and routing layered-image diffusion (with image latents and extra transformer kwargs) through the shared mixin. |
| vllm_omni/diffusion/models/longcat_image/pipeline_longcat_image.py | Makes LongCatImagePipeline a CFGParallelMixin user, replacing inline CFG math with predict_noise_maybe_with_cfg/scheduler_step_maybe_with_cfg and adding an overridable cfg_normalize_function plus scheduler_step wrapper. |
| vllm_omni/diffusion/models/longcat_image/pipeline_longcat_image_edit.py | Enables CFG parallelism for LongCat image editing via CFGParallelMixin, refactors the loop to call predict_noise_maybe_with_cfg (with output slicing) and scheduler_step_maybe_with_cfg, and adds a local scheduler_step. |
| vllm_omni/diffusion/models/ovis_image/pipeline_ovis_image.py | Refactors Ovis-Image denoising into a diffuse function using CFGParallelMixin helpers, plus a custom scheduler_step that preserves MPS dtype behavior. |
| vllm_omni/diffusion/models/flux2_klein/pipeline_flux2_klein.py | Updates Flux.2-Klein's loop to use CFGParallelMixin CFG handling and scheduler stepping, including slicing when image latents are concatenated (editing mode), and defines a scheduler-step wrapper. |
| vllm_omni/diffusion/models/wan2_2/pipeline_wan2_2.py | Makes the Wan2.2 T2V pipeline inherit CFGParallelMixin and replaces inline CFG logic with predict_noise_maybe_with_cfg and scheduler_step_maybe_with_cfg, while still supporting dual-transformer guidance scales. |
| vllm_omni/diffusion/models/wan2_2/pipeline_wan2_2_i2v.py | Same refactor for Wan2.2 I2V, building positive/negative kwargs (including image encoder embeds) and delegating CFG/no-CFG behavior to the mixin plus a pipeline-specific predict_noise. |
| vllm_omni/diffusion/models/wan2_2/pipeline_wan2_2_ti2v.py | Same pattern for Wan2.2 TI2V, with patch-wise timesteps and a local predict_noise helper used by the mixin. |
| vllm_omni/diffusion/models/sd3/pipeline_sd3.py | Makes the SD3 pipeline a CFGParallelMixin user, introduces a dedicated diffuse method that calls predict_noise_maybe_with_cfg/scheduler_step_maybe_with_cfg, and wires forward through this method. |
| examples/offline_inference/text_to_video/text_to_video.py | Imports DiffusionParallelConfig, adds a --cfg_parallel_size CLI flag, includes it in DiffusionParallelConfig, and passes the parallel config plus enforce_eager into Omni. |
| examples/offline_inference/image_to_video/image_to_video.py | Adds DiffusionParallelConfig, --cfg_parallel_size, and --enforce_eager support; constructs parallel_config with the requested CFG parallel size and passes it into Omni. |
| docs/user_guide/diffusion_acceleration.md | Updates acceleration support tables to mark LongCat, Ovis, SD3, and Wan2.2 as CFG-Parallel capable and extends the VideoGen table with a CFG-Parallel column. |
| docs/user_guide/diffusion/parallelism_acceleration.md | Rewrites the CFG-Parallel section to use CFGParallelMixin/QwenImageCFGParallelMixin as the canonical examples, documenting predict_noise_maybe_with_cfg, scheduler_step_maybe_with_cfg, and customization points like predict_noise and cfg_normalize_function. |
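
Based on the example changes summarized above, the sketch below shows how an offline script could wire the new flag into Omni. The import paths and constructor signatures are assumptions for illustration and are not verified against the repository.

import argparse

from vllm_omni import Omni  # assumed entry point
from vllm_omni.diffusion.data import DiffusionParallelConfig  # assumed module path

parser = argparse.ArgumentParser()
parser.add_argument("--cfg_parallel_size", type=int, default=1, choices=[1, 2])
parser.add_argument("--enforce_eager", action="store_true")
args = parser.parse_args()

# cfg_parallel_size=2 splits the conditional/unconditional passes across two ranks;
# cfg_parallel_size=1 keeps the usual sequential CFG on a single process.
parallel_config = DiffusionParallelConfig(cfg_parallel_size=args.cfg_parallel_size)
omni = Omni(
    model="Qwen/Qwen-Image",
    parallel_config=parallel_config,
    enforce_eager=args.enforce_eager,
)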


@wtomin wtomin force-pushed the cfg-base-pipeline branch from 06a074b to 921223e Compare January 28, 2026 09:14
@david6666666 david6666666 added the high priority high priority issue, needs to be done asap label Jan 28, 2026
@wtomin wtomin force-pushed the cfg-base-pipeline branch from fea1c6f to fe6728c Compare January 29, 2026 02:03
"--cfg_parallel_size",
type=int,
default=1,
choices=[1, 2],
Collaborator


What is the meaning of setting CFG parallel size to 1?

Collaborator


Also, this CFG size check should be done in the CFG parallelism implementation itself, not just in the offline examples.

Collaborator Author


The CFG parallel size defaults to 1 because, in vllm_omni/diffusion/data.py, world_size is defined as the product of the individual parallel sizes:

        self.world_size = (
            self.pipeline_parallel_size
            * self.data_parallel_size
            * self.tensor_parallel_size
            * self.ulysses_degree
            * self.ring_degree
            * self.cfg_parallel_size
        )

In addition, I revised data.py to validate cfg_parallel_size in the configuration.
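
A hedged sketch of the kind of check described (the actual validation added to data.py may differ):

def validate_cfg_parallel_size(cfg_parallel_size: int) -> None:
    # cfg_parallel_size multiplies into world_size, so it must be an explicit,
    # supported value rather than 0 or an arbitrary integer.
    if cfg_parallel_size not in (1, 2):
        raise ValueError(
            f"cfg_parallel_size must be 1 (sequential CFG) or 2 (rank-split CFG), "
            f"got {cfg_parallel_size}"
        )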

@wtomin wtomin force-pushed the cfg-base-pipeline branch from fc5ba2f to b5d7733 Compare January 29, 2026 08:55
@wtomin wtomin force-pushed the cfg-base-pipeline branch from 5ce0e19 to 250a4c1 Compare January 30, 2026 02:49
@SamitHuang SamitHuang added the ready label to trigger buildkite CI label Jan 30, 2026
@ZJY0516
Member

ZJY0516 commented Jan 30, 2026

@wtomin Should we also add an e2e test using riverclouds/qwen_image_random?

@wtomin
Collaborator Author

wtomin commented Jan 30, 2026

@wtomin Should we also add an e2e test using riverclouds/qwen_image_random?

That might be slow. I asked @Gaohan123, and the conclusion is that we should stick with the unit test for now.

# Compute the previous noisy sample x_t -> x_t-1 with automatic CFG sync
latents = self.scheduler_step_maybe_with_cfg(noise_pred, t, latents, do_true_cfg)

if torch.cuda.is_available():
Collaborator Author


@ZJY0516 @SamitHuang I have moved the per-step torch.cuda.empty_cache() call to the pipeline of Wan2.2 series models.

Now it works fine at 480x832x33 resolution, and the call is made only once.

Member

@ZJY0516 ZJY0516 Jan 30, 2026


I'm afraid it will break other platforms; for example, could non-CUDA platforms run into OOM errors? We'd also better add a comment explaining why we do this here.

Collaborator Author


Of course. I changed torch.cuda.empty_cache() to current_omni_platform.empty_cache() and added a comment explaining why it is called here.
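
For illustration, a hedged sketch of the resulting call (the import path is an assumption, and the surrounding Wan2.2 pipeline code is not reproduced):

from vllm_omni.platforms import current_omni_platform  # assumed import path

# Free cached allocator memory once per generation (not per denoising step) so large
# video shapes such as 480x832x33 fit; the platform abstraction lets non-CUDA
# backends provide their own empty_cache implementation.
current_omni_platform.empty_cache()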

wtomin added 18 commits January 30, 2026 17:46
@wtomin wtomin force-pushed the cfg-base-pipeline branch from d6a8fdf to 5a9af70 Compare January 30, 2026 09:46
@hsliuustc0106 hsliuustc0106 enabled auto-merge (squash) January 30, 2026 15:30
@hsliuustc0106 hsliuustc0106 merged commit 23cf48d into vllm-project:main Jan 30, 2026
7 checks passed
dongbo910220 pushed a commit to dongbo910220/vllm-omni that referenced this pull request Feb 1, 2026
Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
@wtomin wtomin deleted the cfg-base-pipeline branch February 2, 2026 07:24
with1015 added a commit to with1015/vllm-omni that referenced this pull request Apr 6, 2026
* [Frontend][Model] Support batch request with refined OmniDiffusionReq… (#797)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>

* [Model]: add FLUX.1-dev model (#853)

* [BugFix] ignore mm data from stages to async omni (#954)

Signed-off-by: dengyunyang <584797741@qq.com>

* Revert "[BugFix] ignore mm data from stages to async omni" (#1023)

* [Bugfix] Modify output to model_runner_output (#1026)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Feature] Support cache-dit for Wan 2.2 inference (#1021)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>

* [Doc]Format profiling doc (#993)

Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Hardware] Support platforms and plugin system (#774)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Core]: KV Cache Transfer Encapsulation (#979)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Test]Delete skip mark for amd ci test and fix CI failure (#927)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix][Doc]Specify Qwen3-TTS model name for each task type (#1036)

Signed-off-by: Kyle Huang <yellowsea@gmail.com>

* [Misc] pin version of fa3-fwd (#1051)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [CI] [ROCm] Add more AMD CI tests (#1039)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix qwen image layerd in dummy run (#1027)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [BugFix] Fix noisy output without setting a seed in Qwen Image (#1043)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* [bugfix] remove vllm speech route (#1060)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [Debug] Update GLM-Image Pipeline (#1049)

Co-authored-by: root <root@hk01dgx028.cm.cluster>

* [Diffusion][Bugfix] Fix the flash_attn backends selection logic (#983)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix the accuracy issue of multimodal input. (#1020)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Rein Yang <ruiruyang2@gmail.com>

* [Bugfix] Set VaeImageProcessor `do_convert_rgb` True (#1032)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [feat]: adapt batch request for flux (#1028)

Signed-off-by: wuzhongjian wuzhongjian_yewu@cmss.chinamobile.com

* [CI] Change Qwen3 Omni stage placement strategy  (#1072)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>

* [BugFix] Fix to use correct attn backend (#1038)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>

* [Perf] Qwen3 Omni talker mtp optimization (#1005)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Wan2.2] Optimize memory usage with conditional transformer loading (#980)

Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [Feat] Support XPU Backend in vLLM-Omni (#191)

Signed-off-by: Fanli Lin <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli0116@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Fix] stabilize diffusion images LoRA E2E across CI drift (#1075)

Signed-off-by: dongbo910220 <1275604947@qq.com>

* [Bugfix][Test] Re-enable the log simple tests (#1065)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] pr conflict fix, bugfix ignore mm data from stages to async omni (#1025)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Doc][Bagel] Add BAGEL-7B-MoT documentation and edit the default stage configuration (#987)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: jzz <e1583181@u.nus.edu>

* [Fix] Increase max wait time for server readiness to accommodate model loading (#1089)

Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>

* [Benchmark] Add vLLM-Omni Omni model online benchmark (#780)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Remove Mooncake/Yuanrong connector import warning (#1091)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* fix: UnboundLocalError for role in streaming audio/image responses (#784)

Signed-off-by: Pierre Le Guen <26087574+PierreLeGuen@users.noreply.github.com>

* [Misc] update wechat image (#1096)

* [Feature] Support DiT Layerwise (Blockwise) CPU Offloading (#858)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [BugFix] Modify max_tokens and modify the log and fix #1103 (#1097)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix modulate_index shape error in Qwen-Image-Edit Task (#1100)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Platform] Add supports_torch_inductor interface (#1108)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [BugFix] Fix Qwen3 Omni talker mtp torch.compile startup error (#1104)

Signed-off-by: ram16g <anlianfengjie@163.com>
Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Co-authored-by: ram16g <anlianfengjie@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] fix request_id of image generation in api server (#1112)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Perf]: CFG parallel abstraction (#851)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [BugFix] Fix Qwen3 TTS 0.6B profile run hang (#995) (#1082)

* [CI] [ROCm] Quick fix amd ci (#1116)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix benchmark audio timing error and add benchmark test (#1109)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix][Qwen3TTS] Load speaker_id/voices from model configuration (#1079)

Signed-off-by: pablo <juanz9312@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>

* [NPU] Align with GPUModelRunner (#1114)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [FEATURE] /v1/images/edit interface (#1101)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Bugfix] Fix NPU SDPA attention mask shape and semantics (#1031)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: muziyuhui666 <111362884+muziyuhui666@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [TeaCache]: Add Coefficient Estimation (#940)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [CI]: Bagel E2E Smoked Test (#1074)

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Misc] Bump version to 0.14.0 (#1128)

Signed-off-by: Roger Wang <hey@rogerw.io>

* [Doc] First stable release of vLLM-Omni (#1129)

Signed-off-by: Roger Wang <hey@rogerw.io>

* [Misc] Align error handling with upstream vLLM v0.14.0 (#1122)

Signed-off-by: anna <lee.anna@navercorp.com>
Co-authored-by: anna <lee.anna@navercorp.com>

* [Feature] add Tensor Parallelism to LongCat-Image(-Edit) (#926)

Signed-off-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>

* [CI] Temporarily remove slow tests. (#1143)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: princepride <wangzhipeng628@gmail.com>

* [CI] Refactor test_sequence_parallel.py and add a warmup run for more accurate performance stat (#1165)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Dev/rebase v0.15.0 (#1159)

Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* Docs update paper link (#1169)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Co-authored-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>

* [Debug] Clear Dockerfile.ci to accelerate build image (#1172)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Debug] Correct Unreasonable Long Timeout (#1175)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Doc]Fix - Align with repo. (#1176)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Bugfix][Qwen-Image-Edit] Add a warning log for none negative_prompt (#1170)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] fix qwen image oom (#1168)

Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>

* [Hardware] Disable compile of diffusion on XPU (#1148)

Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>

* [Doc] Fix vLLM version in user docs (#1179)

Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>

* [Refactor] Refactor async chunk and fix the shape mismatch issue (#1151)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* bugfix: /images/edits endpoint fails pipeline data format check (#1141)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Perf] resolving prolonged `cudastreamsynchronize` execution in z image processing (#1105)

Signed-off-by: erfgss <97771661+erfgss@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Bugfix] modify RTF use audio_e2e/audio_duration (#1157)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Doc] Highlight paper & slides. (#1186)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [chore] Remove zmq context initialize (#1187)

Signed-off-by: xiedeyantu <czjourney@163.com>

* [NPU] Update Dockerfile and docs for v0.14.0 (#671)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] E2E metric incorrect qwen3-omni with async chunk feature (#1018)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] opt doc (#1118)

Signed-off-by: David Chen <530634352@qq.com>

* [Bugfix] Fix tp+sp accuracy, incorrect process group mapping (#1178)

Signed-off-by: David Chen <530634352@qq.com>

* [Feature] Enable use_audio_in_video for Qwen 3 Omni Online (#1198)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix] async_chunk rebase v0.15.0 (#1195)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [feature]: support flux cache_dit (#1145)

Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>

* [CI] Add CI branch coverage calculation,  fix statement coverage results and add log before test for buildkite  log group (#1120)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Wan 2.2][Diffusion] Add TP Support (#964)

Signed-off-by: weichen <calvin_zhu0210@outlook.com>

* [Hardware] [Feat] Setup platform dependent package installation (#1046)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: PopSoda2002 <zhouhp.me@gmail.com>
Co-authored-by: gcanlin <canlinguosdu@gmail.com>

* [XPU] Fix XPU UTs for basic coverage (#1164)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* [Test] Add BuildKite test-full script for full CI. (#867)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>

* [Refactor] Reuse upstream Qwen3MoeSparseMoeBlock (#1202)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bugfix] Fix wan2.2 ti2v (#1221)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Fix '--max-generated-image-size' cli args type (#1249)

Signed-off-by: ApsarasX <apsarax@outlook.com>

* [Bugfix] Ensure seed=0 is correctly handled in image edit (#1248)

Signed-off-by: ApsarasX <apsarax@outlook.com>

* [Docs] Add example image download step to Image-To-Video examples (#1258)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [Bugfix] Fix padding bug in 12Hz tokenizer ConvTranspose1d decode (#1241)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [bugfix] Fix multimodal_output property to check completion outputs where audio data is attached (#1203)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [Doc] Update QA relevant to quantization  (#1257)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [Bugfix] Fix Doc link Rrror (#1263)

Signed-off-by: lishunyang <lishunyang12@163.com>

* Process-Scoped GPU Memory Accounting (#1204)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>

* [ComfyUI]: ComfyUI integration (#1113)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>

* fix: add diffusion offload args to OmniConfig group instead of serve_parser (#1271)

Signed-off-by: Chenguang ZHENG <645327136@qq.com>

* [Doc] Adding models/pipelines/features Tutorial (#1196)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>

* [CI] Add env variable check for nightly CI  (#1281)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [CI] Add pytest markers to current tests and update the doc. (#577)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Diffusion][Perf] Remove Redundant Communication Cost by Refining SP Hook Design (#1275)

Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>

* [Feature] Opt metrics structure (#891)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Test] Add example test cases for omni online (#1086)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [CI] Reduce the time for Diffusion Sequence Parallelism Test (#1283)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Model] SupportHunyuanImage3 Diffusion Model in vllm-omni (#1085)

Signed-off-by: Semmer2 <semmer@live.cn>

* [Chore] Update copyright year. (#1256)

Signed-off-by: lishunyang <lishunyang12@163.com>

* [feature]: support Flux.1-dev CFG-Parallel (#1269)

* [Bugfix] Fix 'NoneType' AttributeError in stable-diffusion model detect (#1254)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* [Doc] Update Qwen3-TTS docs for consistency with Omni examples (#1226)

Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Fix]Ensure HuggingFace downloads complete before initialization. (#1213)

Signed-off-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [BugFix] Fixed the issue where ignore_eos was not working. (#1286)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [Test] Add e2e tests for Qwen3-TTS speech endpoint (#1206)

Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>

* [Feat]: support VAE patch parallelism (#756)

Signed-off-by: dongbo910220 <1275604947@qq.com>
Co-authored-by: hsliuustc0106 <liuhongsheng4@huawei.com>

* [CI] Disable Qwen3-TTS E2E Test in pipeline.yml (#1306)

Signed-off-by: Gao Han <hgaoaf@connect.ust.hk>

* [Misc] Add per-request generator_device to online image gen and edit (#1183)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Bagel]: Support TP (#1293)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* [Bugfix] Fix image edit RoPE crash when explicit height/width are provided (#1265)

Signed-off-by: lishunyang <lishunyang12@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] Sync (#1216)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Bugfix] fix precision issues of qwen3-omni when enable async_chunk without system prompt (#1288)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* [Debug] Add trigger to concurrent stage init (#1274)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix][Qwen3-TTS] Fix task type (#1317)

Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>

* Unifying CLI Argument Naming Style (#1309)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>

* [Bugfix][Qwen3-TTS] Preserve original model ID in omni_snapshot_download (#1318)

* [CI] Run nightly tests. (#1333)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Feature]: FP8 Quantization Support for DiT  (#1034)

Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>

* Fix yield token metrics and opt metrics record stats (#1292)

* [Test] L2 & L3 Test Case Stratification Design for Omni Model (#1272)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Pref] Support Qwen3 Omni code2wav batch infernce with async chunk (#1246)

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: Ziming Huang <1520787127@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* update qwen3-omni & qwen2.5-onmi openai client (#1304)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* [Feature] Support Wan2.2 T2V and I2V Online Serving with OpenAI /v1/videos API (#1073)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Signed-off-by: SamitHuang <285365963@qq.com>
Co-authored-by: Flora Feng <4florafeng@gmail.com>

* [Feature] add Tensor Parallelism to SD_3.5 (#1336)

Signed-off-by: GG-li <3226868735@qq.com>

* [Feature]async scheduling to overlap chunk IO and compute (#951)

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Co-authored-by: Bhanu068 <voutharoja.bhanu06@gmail.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Bugfix] reused metrics to modify the API Server token statistics in Stream Response (#1301)

Signed-off-by: John Liu BUAA <liukecheng97@gmail.com>

* Refactor CPU Offloading Backend Pattern (#1223)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [DOC] Doc for CI test - Details about five level structure and some other files. (#1167)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: yenuo26 <410167048@qq.com>

* [Bugfix] remove Tongyi-MAI/Z-Image-Turbo related test from L2 ci (#1348)

Signed-off-by: dengyunyang <584797741@qq.com>

* [Misc] wechat image update (#1354)

Signed-off-by: David Chen <530634352@qq.com>

* [Misc] Support WorkerWrapperBase and CustomPipeline for Diffusion Worker (#764)

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>

* [Feature][Bugfix] Add CFG feature to Bagel (#1310)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Feature]: Diffusion sleep to use process level memory calculation (#1276)

Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: dsinghvi <divyanshsinghvi@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* change qwen3-omni open cudagraph by default (#1352)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [XPU] Update Bagel's flash_attn_varlen_func to fa utils (#1295)

Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>

* [Test] Add Omni Model Performance Benchmark Test (#1321)

Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>

* [BugFix]: Revert utils change (#1369)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* [Rebase] Rebase to vllm v0.16.0 (#1357)

Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Isotr0py <Isotr0py@outlook.com>
Co-authored-by: ZJY0516 <zhu.jiangyun@foxmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>

* [Test] Fix expansion and example test case for qwen3-omni (#1358)

Signed-off-by: yenuo26 <410167048@qq.com>

* [v0.16.0][BUG FIX]Fix hunyuan MOE after update to 0.16.0 (#1401)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [0.16.0] remove cuda hard-code for Hunyuan Image3 (#1402)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>

* [XPU] Add XPU Dockerfile and related docs (#1162)

Signed-off-by: Yan Ma <yan.ma@intel.com>
Signed-off-by: Daniel Huang <daniel1.huang@intel.com>
Co-authored-by: Daniel Huang <daniel1.huang@intel.com>

* [Bugfix] Fix Hardcoded Datatypes in Z-image (#1393)

Signed-off-by: Alex Brooks <albrooks@redhat.com>

* [Feature] : Support disaggregated inference pipeline for Qwen3_TTS (#1161)

Signed-off-by: Sy03 <1370724210@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Feature] Add automated PR reviewer bot with GLM integration (#1424)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* [Misc] Add Qwen2.5-Omni-3B model support to Gradio demo (#1382)

Signed-off-by: UsamaKenway <usamakenway@gmail.com>

* [misc] Feature/pr reviewer auto trigger&update model (#1431)

Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Hunter Liu <hunter@liu.sh>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "[misc] Feature/pr reviewer auto trigger&update model" (#1432)

* [Doc] Update GPU installation commands (#1434)

* [ROCM] [CI] fix dockerfile.rocm to support nightly build and also fix amd ci v0.16.0rc1 (#1380)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Feature][BAGEL] Combine multi-branch cfg into a single batch to accelerate inference. (#1429)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Feat]: add ASCII art logo for vLLM-Omni  (#1430)

* [Bug] [Bagel] Fix kv transfer bug (#1437)

Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: Wang Zhipeng: princepride <wangzhipeng628@gmail.com>

* [CI] Set L2 & L3 tests running conditions. (#1344)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* [Feature] vLLM-Omni RDMA connector (#1019)

Signed-off-by: natureofnature <wzliu@connect.hku.hk>

* [Minor][Refactor] Pass seq_token_counts explicitly (#1425)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Misc] Extend Diffusion Benchmark script to other backends (#875)

Signed-off-by: NickLucche <nlucches@redhat.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Feature] Support Stage Based Deployment CLI (#939)

Signed-off-by: wuhang <wuhang6@huawei.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: wuhang <whlbx@hotmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Doc] Optimize vLLM-Omni metrics documentation (#1311)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix]  Forward all vllm-omni serve command parameters to model (#985)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc]: Add bagel single/multi node usage with mooncake document (#1450)

* [Qwen3TTS][Feat] Code2Wav batched decoding (#1426)

Signed-off-by: pablo <pablo@agigo.ai>
Co-authored-by: pablo <pablo@agigo.ai>

* [CI] Remove overwhelming debug log (#1463)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Misc] update wechat image (#1464)

Signed-off-by: David Chen <530634352@qq.com>

* [Doc] Refine Diffusion Tutorial Documents (#1305)

Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>

* [Bugfix] Robust Audio Data Handling in _create_audio_choice (#1222)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>

* [Bugfix]: Fix merging updated additional information to ensure dict type (#1296)

Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>

* [Model]Add new nextstep_1(Diffusion) model(only T2I) (#612)

Signed-off-by: Dong Wang <dongw2019@gmail.com>
Signed-off-by: sniper35 <dongw2019@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Add TTS configuration options (#1177)

Signed-off-by: Yanick Schraner <yanick.schraner@bs.ch>

* [Debug] Multi-Request for Qwen 3 Omni use_audio_in_video (#1433)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Bugfix] Fix case-sensitive task_type matching in Qwen3TTSModelForGeneration (#1455)

Signed-off-by: Sangchun Ha <seomk9896@gmail.com>

* [BugFix] process request.num_cached_tokens if it equals to the initial value  (#1468)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Bugfix] Fix SDPA attention mask dtype and shape (Fix #857) (#1349)

Signed-off-by: jader <yjader@foxmail.com>

* [Test] Reduce Perf test case and fix modify stage config (#1449)

Signed-off-by: yenuo26 <410167048@qq.com>

* [NPU] Upgrade to v0.16.0 (#1375)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [CI] Update Dockerfile for vllm-omni CI image and remove obsolete dep… (#1491)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Fix][Chore] Qwen3-TTS Modeling Minor Code Sanity Improvements (#1482)

Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>

* [Bugfix] Fix tuple/list KV cache extraction crash (#1405)

Signed-off-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc] format lora related docs for the user's end (#1009)

Signed-off-by: AndyZhou952 <jzhoubc@connect.ust.hk>
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>

* [Feature] Support Wan2.2 output with irregular shapes (#1279)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [Misc] Migrate L1 tests to use pytest-mock (#1315)

Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>

* [Bugfix] Fix LoRA Scaling on Active Adapters (#1421)

Signed-off-by: Alex Brooks <albrooks@redhat.com>

* [Bugfix] fix record audio generated frame in offline infer (#1312)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>

* [Model] Support OmniGen2 (#513)

Signed-off-by: Yupu <feng.yu.pu0330@gmail.com>

* [Bugfix][Qwen3TTS] (#1289)

Signed-off-by: pablo <juanz9312@gmail.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Use pull through cache image for H100 pool (#1518)

Signed-off-by: Kevin H. Luu <khluu000@gmail.com>

* [ROCm] [CI] [Docker] Point to use the latest vLLM v0.16.0 stable version (#1500)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Bugfix] fix offline text_to_image error from #1009 (#1515)

Signed-off-by: David Chen <530634352@qq.com>

* [XPU] Enable FLASH_ATTN on XPU (#1332)

Signed-off-by: Yan Ma <yan.ma@intel.com>

* Revert gpu_1 job to use regular image (#1521)

Signed-off-by: Kevin H. Luu <khluu000@gmail.com>

* [Chore] remove unused logger in omni_diffusion (#531) (#1509)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>

* [Qwen3TTS][Feat] Streaming output (#1438)

Signed-off-by: pablo <pablo@agigo.ai>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: pablo <pablo@agigo.ai>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Bugfix] Race condition in MultiprocExecutor during concurrent access to Scheduler (#1448)

Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Doc][Test][Misc] ComfyUI test, more screenshot, and code cleaning (#1435)

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Samit <285365963@qq.com>

* [Performance]Qwen3-Omni performance optimization (#1378)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

* [Feature] Support HSDP for diffusion models (#1339)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [CI] fixed CI timeout (#1460)

Signed-off-by: zhumingjue <zhumingjue@huawei.com>
Signed-off-by: zhumingjue138 <zhumingjue@huawei.com>

* [Bugfix] Use uds for zmq address if not set --stage-id (#1522)

Signed-off-by: wuhang <wuhang6@huawei.com>

* [BugFix] Restore talker's config (#1524)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Canlin Guo <961750412@qq.com>

* [XPU] fix qwen_omni after rebase to v0.16.0 (#1416)

Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Platform] Enable layerwise offload on all hardware (#1492)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* diffusion: enable VAE patch parallel for SD3.5 (#1428)

Signed-off-by: dongbo910220 <1275604947@qq.com>

* [Perf] GLM Image (#920)

Signed-off-by: JaredforReal <w13431838023@gmail.com>
Signed-off-by: Jared Wen <w13431838023@gmail.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [skip ci][Doc] add design docs for async chunk in qwen3-omni (#962)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* feat(qwen3-tts): Add CUDA Graph support for speech tokenizer decoder (#1205)

Signed-off-by: xulusjb <fdukeshik@gmail.com>
Co-authored-by: xulusjb <fdukeshik@gmail.com>

* [New Model]: XiaomiMiMo/MiMo-Audio-7B-Instruct support (#750)

Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: 齐保元 <qibaoyuan@xiaomi.com>
Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: GG-li <3226868735@qq.com>
Signed-off-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: Baoyuan Qi <qibaoyuan@126.com>
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: dongbo910220 <1275604947@qq.com>
Signed-off-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Signed-off-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: baoyuan qi <qibaoyuan@126.com>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Signed-off-by: Prajwal A <prajwalanagani@gmail.com>
Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: 丁宁 <nndding@gmail.com>
Signed-off-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning@xiaomi.com>
Co-authored-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: Zhang Shijin <zhangshijin@xiaomi.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Co-authored-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: Canlin Guo <canlinguosdu@gmail.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: JohnJan <wuzhongjian_yewu@cmss.chinamobile.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Co-authored-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: TJian <tunjian.tan@embeddedllm.com>
Co-authored-by: shijin zhang <zsj1364226740@gmail.com>
Co-authored-by: Zhou Taichang <tzhouam@connect.ust.hk>
Co-authored-by: root <root@hk01dgx028.cm.cluster>
Co-authored-by: Prajwal A <34590600+LawJarp-A@users.noreply.github.com>
Co-authored-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: dingning <dingning7@xiaomi.com>
Co-authored-by: ning ding <nndding@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Feature]: Native GGUF Quantization Support for DiT (#1285)

Signed-off-by: David Chen <530634352@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* Add benchmark for `v1/audio/speech` non-streaming (#1408)

Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

* [Version] Auto generate version using `setuptool_scm` (#1224)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Feat] : Support Async chunk cleanup (#1087)

Signed-off-by: Sy03 <1370724210@qq.com>

* [Profiler] Support online profiling (#1136)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>

* [Bugfix] Fix redundant finished req status updating on OmniGenerationScheduler (#1510)

Signed-off-by: shijin zhang <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: 齐保元 <qibaoyuan@xiaomi.com>

* [XPU][NPU][ROCM] enable cpu_offloading flag for non_cuda (#1488)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: gcanlin <canlinguosdu@gmail.com>

* [Chore] Cleanup dead code in GGUF DiT code path (#1533)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Doc] Update installation instructions for vllm 0.16.0 (#1505)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Doc] [skip ci]Sync. (#1363)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>

* [CI][skip ci]Update H100 image link based on #1518 (#1538)

Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>

* Fix no embed text spk tokens (#1540)

Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>

* [Debug] Merge vllm pull 35368 (#1534)

Signed-off-by: tzhouam <tzhouam@connect.ust.hk>

* [Docs] update async chunk docs diagram [skip ci] (#1530)

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>

* fix(qwen3-tts): fix Base ICL voice clone producing corrupted audio (#1554)

Signed-off-by: linyueqian <linyueqian@outlook.com>

* [NPU][Bugfix] Align GPU side and recover qwen3-tts (#1564)

Signed-off-by: gcanlin <canlinguosdu@gmail.com>

* [BugFix] Fix unexpected crash when init OmniDiffusion (#1562)

Signed-off-by: Semmer2 <semmer@live.cn>

* [CI] Modify some CI test cases to run on L4 environment to reduce H100 resource usage. (#1543)

Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>

* [BugFix]: fix a lot of bugs (#1565)

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* feat: add HyperCLOVAX-SEED-Omni-8B support

Model files:
- vllm_omni/diffusion/models/hyperclovax_vision/: vision decoder pipeline
  (HyperCLOVAXVisionPipeline) using flow matching diffusion + VisionTransformer
- vllm_omni/diffusion/models/hyperclovax_audio/: audio decoder pipeline
  (HyperCLOVAXAudioPipeline) using Unit-BigVGAN codec
- vllm_omni/model_executor/stage_input_processors/hyperclovax_seed_omni.py:
  thinker2vision_decoder and thinker2audio_decoder — extract discrete tokens from
  LLM output; truncate/pad vision codes to 729 (27x27) for decoder

Registry:
- vllm_omni/diffusion/registry.py: register HyperCLOVAXVisionPipeline and
  HyperCLOVAXAudioPipeline with post-process functions

Stage config:
- vllm_omni/model_executor/stage_configs/hcx_omni.yaml: 3-stage config
  Stage 0: LLM thinker (TP=4, GPUs 0-3), Stage 1: vision decoder (GPU 4),
  Stage 2: audio decoder (GPU 5)

Bug fixes for HyperCLOVAX compatibility:
- diffusion/request.py: add extra dict field to OmniDiffusionRequest so
  vision_tokens/audio_tokens from stage input processors reach the pipeline
- entrypoints/async_omni_diffusion.py: extract OmniTokensPrompt.additional_information
  into OmniDiffusionRequest.extra before creating request
- entrypoints/omni_stage.py: skip empty engine inputs (text-only requests where
  thinker2vision_decoder/thinker2audio_decoder return [])
- entrypoints/async_omni.py: handle skipped sentinel in _process_single_result
  so text-only requests complete without crashing on Stage 1/2
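
For illustration, a minimal sketch (not from the repository) of the truncate/pad step that thinker2vision_decoder applies to the discrete vision codes; the helper name and pad_token_id default are assumptions:

```python
from typing import List

VISION_GRID = 27 * 27  # 729 codes expected by the vision decoder


def normalize_vision_codes(codes: List[int], pad_token_id: int = 0) -> List[int]:
    """Truncate or right-pad the discrete vision codes to a fixed 27x27 grid."""
    if len(codes) >= VISION_GRID:
        return codes[:VISION_GRID]
    return codes + [pad_token_id] * (VISION_GRID - len(codes))
```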

* fix: correct decoder params and HCX porting fixes

- hcx_omni.yaml: guidance_scale 3.5→0.75, num_inference_steps 30→50
  (matches OmniServe production defaults; 3.5 caused over-amplified
  autoguidance → shrunken/degraded output images)
- omni_stage.py: skip empty engine inputs for text-only requests
- async_omni_diffusion.py: extract OmniTokensPrompt.additional_information
  into OmniDiffusionRequest.extra (audio_tokens/vision_tokens)
- registry.py: HCX Omni diffusion model registration fix

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: HyperCLOVAX-SEED-Omni-8B stage pipeline and entrypoint fixes

* fix: change guidance_scale from 9.0 to 0.75 (autoguidance scale, OmniServe default)

* feat: add audio decoder Stage 2 to hcx_omni pipeline

- Wire HyperCLOVAXAudioPipeline as Stage 2 in hcx_omni.yaml
- GPU 5 assigned for audio decoder (Unit-BigVGAN / NCCosybigvganDecoder)
- Add runtime edge 0->2 (thinker -> audio decoder)
- Implement post-generation PCM chunk streaming for audio output
  (4800 samples / 200ms per SSE event @ 24kHz, int16 base64-encoded)

Refs: github.com/vllm-project/vllm-omni/pull/869 (already incorporated)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
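
As a rough illustration of the PCM chunk streaming above, a hedged sketch that cuts 24 kHz float32 audio into 200 ms (4800-sample) int16 chunks and base64-encodes each one for an SSE event; the function name is hypothetical:

```python
import base64

import numpy as np

SAMPLE_RATE = 24_000
CHUNK_SAMPLES = SAMPLE_RATE // 5  # 4800 samples == 200 ms


def iter_pcm_sse_chunks(pcm_float32: np.ndarray):
    """Yield base64-encoded int16 PCM chunks from float32 audio in [-1, 1]."""
    pcm_int16 = (np.clip(pcm_float32, -1.0, 1.0) * 32767.0).astype(np.int16)
    for start in range(0, len(pcm_int16), CHUNK_SAMPLES):
        chunk = pcm_int16[start:start + CHUNK_SAMPLES]
        yield base64.b64encode(chunk.tobytes()).decode("ascii")
```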

* fix: vllm version compatibility for HyperCLOVAX audio decoder startup

- config/model.py: try/except fallback for AttentionBackendEnum import
  (vllm.v1.attention.backends.registry absent in older vllm builds)
- pipeline_hyperclovax_audio.py: return actual named_parameters() from
  load_weights() when using MAR checkpoint so diffusers_loader strict
  check passes (weights loaded eagerly in __init__ via MAR extraction)
- qwen3_omni_moe_thinker.py, qwen2_5_omni_thinker.py: try/except stubs
  for check_interleaved_audio_video and merge_interleaved_embeddings
  which are absent in older vllm qwen2_5_omni_thinker; these symbols
  are only exercised by Qwen models, not HyperCLOVAX

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
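
A minimal sketch of the try/except import fallback described for config/model.py; the None fallback is an assumption about how callers degrade on older vLLM builds:

```python
try:
    from vllm.v1.attention.backends.registry import AttentionBackendEnum
except ImportError:
    # Older vLLM builds do not ship vllm.v1.attention.backends.registry;
    # callers must tolerate the enum being unavailable.
    AttentionBackendEnum = None
```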

* fix: add edge 1→2 and correct model key in hcx_omni.yaml Stage 2

- Add runtime edge from:1 to:2 (required for Stage-2 connector init;
  without it AsyncOrchestrator cannot route to audio decoder at runtime)
- Change model_subdir to model for Stage-2 engine_args to match
  total-poc working reference config

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: audio S2S output - handle diffusion outputs in _create_audio_choice

HyperCLOVAXAudioPipeline (diffusion) stores audio in multimodal_output
directly (OmniRequestOutput.from_diffusion), not in outputs[0].multimodal_output
like LLM pipelines. Fix three locations:

1. _create_audio_choice (non-streaming): use omni_outputs.multimodal_output
   when final_res.outputs is empty (diffusion path).
2. Streaming audio path: same fix for _final_res.outputs[0].
3. Both loops (for output in final_res.outputs): fall back to single
   synthetic choice at index 0 when outputs list is empty.
4. Handle bytes audio output from HyperCLOVAXAudioPipeline post-process
   (returns WAV bytes, not tensors like Qwen3-Omni).

Also fixes audio input (A2T) regression: skip diffusion prompt extraction
when mm_data has audio content (added in previous session).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
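
Roughly, the fallback in _create_audio_choice looks like the sketch below; attribute paths are simplified and the helper name is hypothetical:

```python
def resolve_multimodal_output(final_res):
    """Pick the multimodal payload from either the LLM or the diffusion path."""
    if final_res.outputs:
        # LLM-style result: audio is attached to the first sequence output.
        return final_res.outputs[0].multimodal_output
    # Diffusion-style result (OmniRequestOutput.from_diffusion): audio is stored
    # on the request-level multimodal_output field instead.
    return final_res.multimodal_output
```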

* fix: parse WAV bytes with soundfile for uniform PCM chunk streaming

HyperCLOVAXAudioPipeline returns WAV bytes including 44-byte header.
The previous byte-offset splitting included the header in the first
chunk, corrupting it. Fix: parse with soundfile to get float32 PCM,
then convert to int16 chunks uniformly regardless of source type
(bytes or tensor).

Verified: 136 audio chunks x 200ms = 27.04s audio streamed correctly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
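
A hedged sketch of this normalization step: WAV bytes are parsed with soundfile so the 44-byte header never ends up in the first chunk, while tensor outputs go straight to float32 PCM; the function name is illustrative:

```python
import io

import numpy as np
import soundfile as sf


def to_float32_pcm(audio) -> np.ndarray:
    """Return float32 PCM regardless of whether audio is WAV bytes or a tensor."""
    if isinstance(audio, (bytes, bytearray)):
        # Parse the WAV container (header + data) instead of slicing raw bytes.
        pcm, _sample_rate = sf.read(io.BytesIO(audio), dtype="float32")
        return pcm
    # Tensor-like output (e.g. a CPU torch.Tensor or ndarray).
    return np.asarray(audio, dtype=np.float32)
```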

* feat: zero-shot TTS with speaker embedding from input audio

- serving_chat.py: extract last input_audio base64 from request messages
  and inject as ref_audio_b64 into engine_prompt dict
- thinker2audio_decoder: read ref_audio_b64 from prompt and pass as
  ref_audio_tokens to Stage 2 (HyperCLOVAXAudioPipeline)
- hcx_omni.yaml: switch Stage 2 to NCZSCosybigvganDecoder.mar (zero-shot)
  which uses ECAPA-TDNN speaker encoder instead of finetuned ID lookup

Pipeline: input audio -> ECAPA-TDNN -> speaker embedding -> BigVGAN synthesis
matching the voice characteristics of the original speaker.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: wire audio decoder Stage 2 to hcx_omni pipeline and fix S2S flow

- Add Stage 2 (HyperCLOVAXAudioPipeline / NCZSCosybigvganDecoder) to hcx_omni.yaml
  with GPU 5, gpu_memory_utilization 0.4, edge 0->2 from thinker
- Fix thinker2audio_decoder: correct audio token range (128606-135167),
  remap to [0, 6561) for BigVGAN input, handle empty token case gracefully
- Fix pipeline_hyperclovax_audio.py post_process_func signature and
  incorporate PR#869 BUG FIX patches for stable audio generation
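
The token-range remap can be sketched as below; the half-open interpretation of the range and the function name are assumptions:

```python
from typing import List

AUDIO_TOKEN_START = 128606
AUDIO_TOKEN_END = 135167  # AUDIO_TOKEN_START + 6561


def extract_audio_codes(thinker_tokens: List[int]) -> List[int]:
    """Keep thinker tokens in the audio codebook range and shift them to [0, 6561)."""
    codes = [t - AUDIO_TOKEN_START
             for t in thinker_tokens
             if AUDIO_TOKEN_START <= t < AUDIO_TOKEN_END]
    # An empty list is a valid result; the audio stage is simply skipped upstream.
    return codes
```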

* fix: use finetuned audio decoder and fix transformers_modules deserialization

- hcx_omni.yaml: switch Stage 2 from NCZSCosybigvganDecoder (zero-shot,
  ECAPA-TDNN) to NCCosybigvganDecoder (finetuned, nn.Embedding speaker id).
  Zero-shot decoder required ref_audio (mel spectrogram) which is unavailable
  for text-only requests and incompatible with finetuned decoder path.

- pipeline_hyperclovax_audio.py: guard ref_audio processing with
  'not self.bigvgan.finetune' — finetuned decoder has no ECAPA-TDNN encoder,
  so passing ref_audio bytes would crash with 'expected 100 channels'.

- omni_stage.py: add HuggingFace modules cache (~/.cache/huggingface/modules)
  to sys.path before queue.get_nowait() in try_collect(). Stage-0 pickles
  outputs containing custom classes from transformers_modules (trust_remote_code),
  but the API server process doesn't have this path, causing deserialization
  failures that silently drop Stage-0 outputs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
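
The sys.path fix can be sketched roughly as follows (the cache path comes from the commit text; the helper name is hypothetical):

```python
import os
import sys

HF_MODULES_CACHE = os.path.expanduser("~/.cache/huggingface/modules")


def ensure_hf_modules_on_path() -> None:
    """Allow unpickling of objects whose classes live under transformers_modules."""
    if HF_MODULES_CACHE not in sys.path:
        sys.path.insert(0, HF_MODULES_CACHE)
```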

* fix: restore zero-shot speaker cloning with fallback for text-only requests

- hcx_omni.yaml: revert to NCZSCosybigvganDecoder.mar (zero-shot ECAPA-TDNN)
  for voice-preserving S2S synthesis. NCCosybigvganDecoder used a fixed
  integer speaker_id and lost the input speaker's voice.

- pipeline_hyperclovax_audio.py: add zero-mel fallback branch for
  finetune=False + ref_audio=None case. When a text-only request arrives
  (no input audio → no ref_audio), ECAPA-TDNN receives a zero mel tensor
  [1, num_mels, 64] instead of crashing with 'expected 100 channels'.
  S2S requests always have ref_audio so the zero-shot cloning path is
  unchanged.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
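
A minimal sketch of the zero-mel fallback, assuming 80 mel bins (num_mels is not stated in the commit) and a hypothetical helper name:

```python
from typing import Optional

import torch


def resolve_ref_mel(ref_mel: Optional[torch.Tensor], num_mels: int = 80) -> torch.Tensor:
    """Use a silent mel spectrogram when a text-only request has no reference audio."""
    if ref_mel is not None:
        return ref_mel
    # Zero mel tensor of shape [1, num_mels, 64] stands in for "no reference speaker".
    return torch.zeros(1, num_mels, 64)
```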

* feat: add stage config yaml for HCX audio decoder

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* feat: add HyperCLOVAX-SEED-Omni 8B model as vllm-omni executor

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* feat: add HCX audio decoder pipeline

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* fix: modify exception for HCX audio decoder (GAN)

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

* fix: default temperature set to 0, and pipeline model evaluation mode

Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>

---------

Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
Signed-off-by: dengyunyang <584797741@qq.com>
Signed-off-by: gcanlin <canlinguosdu@gmail.com>
Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Signed-off-by: lishunyang <lishunyang12@163.com>
Signed-off-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: wangyu31577 <wangyu31577@hundsun.com>
Signed-off-by: Kyle Huang <yellowsea@gmail.com>
Signed-off-by: zjy0516 <riverclouds.zhu@qq.com>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Signed-off-by: natureofnature <wzliu@connect.hku.hk>
Signed-off-by: linyueqian <linyueqian@outlook.com>
Signed-off-by: mxuax <mxuax@connect.ust.hk>
Signed-off-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
Signed-off-by: Divyansh Singhvi <divyanshsinghvi@gmail.com>
Signed-off-by: Lin, Fanli <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli.lin@intel.com>
Signed-off-by: Fanli Lin <fanli0116@gmail.com>
Signed-off-by: dongbo910220 <1275604947@qq.com>
Signed-off-by: Ding Zuhao <e1583181@u.nus.edu>
Signed-off-by: jzz <e1583181@u.nus.edu>
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Signed-off-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Signed-off-by: Pierre Le Guen <26087574+PierreLeGuen@users.noreply.github.com>
Signed-off-by: yuanheng <jonathan.zhaoyh@gmail.com>
Signed-off-by: ram16g <anlianfengjie@163.com>
Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Signed-off-by: pablo <juanz9312@gmail.com>
Signed-off-by: Roger Wang <hey@rogerw.io>
Signed-off-by: anna <lee.anna@navercorp.com>
Signed-off-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>
Signed-off-by: Alicia <115451386+congw729@users.noreply.github.com>
Signed-off-by: Taichang Zhou <tzhouam@connect.ust.hk>
Signed-off-by: tzhouam <tzhouam@connect.ust.hk>
Signed-off-by: hsliu <liuhongsheng4@huawei.com>
Signed-off-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Signed-off-by: zhenwei-intel <zhenwei.liu@intel.com>
Signed-off-by: Yuanheng Zhao <jonathan.zhaoyh@gmail.com>
Signed-off-by: erfgss <97771661+erfgss@users.noreply.github.com>
Signed-off-by: xiedeyantu <czjourney@163.com>
Signed-off-by: Junhong Liu <98734602+LJH-LBJ@users.noreply.github.com>
Signed-off-by: Junhong Liu <ljh_lbj@163.com>
Signed-off-by: David Chen <530634352@qq.com>
Signed-off-by: weichen <calvin_zhu0210@outlook.com>
Signed-off-by: Yan Ma <yan.ma@intel.com>
Signed-off-by: ApsarasX <apsarax@outlook.com>
Signed-off-by: Chenguang ZHENG <645327136@qq.com>
Signed-off-by: yenuo26 <410167048@qq.com>
Signed-off-by: Semmer2 <semmer@live.cn>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Signed-off-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Signed-off-by: Gao Han <hgaoaf@connect.ust.hk>
Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
Signed-off-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Signed-off-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>
Signed-off-by: Ziming Huang <1520787127@qq.com>
Signed-off-by: SamitHuang <285365963@qq.com>
Signed-off-by: GG-li <3226868735@qq.com>
Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Signed-off-by: John Liu BUAA <liukecheng97@gmail.com>
Signed-off-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Signed-off-by: dsinghvi <divyanshsinghvi@gmail.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Signed-off-by: Daniel Huang <daniel1.huang@intel.com>
Signed-off-by: Alex Brooks <albrooks@redhat.com>
Signed-off-by: Sy03 <1370724210@qq.com>
Signed-off-by: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: UsamaKenway <usamakenway@gmail.com>
Signed-off-by: Hunter Liu <hunter@liu.sh>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: wuhang <wuhang6@huawei.com>
Signed-off-by: wuhang <whlbx@hotmail.com>
Signed-off-by: pablo <pablo@agigo.ai>
Signed-off-by: Shijin Zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: Dong Wang <dongw2019@gmail.com>
Signed-off-by: sniper35 <dongw2019@gmail.com>
Signed-off-by: Yanick Schraner <yanick.schraner@bs.ch>
Signed-off-by: Sangchun Ha <seomk9896@gmail.com>
Signed-off-by: jader <yjader@foxmail.com>
Signed-off-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Signed-off-by: AndyZhou952 <jzhoubc@connect.ust.hk>
Signed-off-by: Yupu <feng.yu.pu0330@gmail.com>
Signed-off-by: Kevin H. Luu <khluu000@gmail.com>
Signed-off-by: zhumingjue <zhumingjue@huawei.com>
Signed-off-by: zhumingjue138 <zhumingjue@huawei.com>
Signed-off-by: JaredforReal <w13431838023@gmail.com>
Signed-off-by: Jared Wen <w13431838023@gmail.com>
Signed-off-by: xulusjb <fdukeshik@gmail.com>
Signed-off-by: 齐保元 <qibaoyuan@xiaomi.com>
Signed-off-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Signed-off-by: Baoyuan Qi <qibaoyuan@126.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
Signed-off-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Signed-off-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Signed-off-by: baoyuan qi <qibaoyuan@126.com>
Signed-off-by: Prajwal A <prajwalanagani@gmail.com>
Signed-off-by: 丁宁 <nndding@gmail.com>
Signed-off-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning7@xiaomi.com>
Signed-off-by: dingning <dingning@xiaomi.com>
Signed-off-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Signed-off-by: Canlin Guo <961750412@qq.com>
Signed-off-by: shijin zhang <75300765+Dovis01@users.noreply.github.com>
Signed-off-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>
Signed-off-by: Hyunjoon Jeong <with1015@unist.ac.kr>
Co-authored-by: Zeyu Huang | 黃澤宇 <11222265+fhfuih@users.noreply.github.com>
Co-authored-by: JohnJan <wuzhongjian_yewu@cmss.chinamobile.com>
Co-authored-by: dengyunyang <584797741@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Co-authored-by: Canlin Guo <canlinguosdu@gmail.com>
Co-authored-by: Samit <285365963@qq.com>
Co-authored-by: SYLAR <125541396+lishunyang12@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: wangyu <53896905+yenuo26@users.noreply.github.com>
Co-authored-by: wangyu31577 <wangyu31577@hundsun.com>
Co-authored-by: kYLe <yellowsea@gmail.com>
Co-authored-by: Jiangyun Zhu <riverclouds.zhu@qq.com>
Co-authored-by: TJian <tunjian.tan@embeddedllm.com>
Co-authored-by: NATURE <wzliu@connect.hku.hk>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: Zhou Taichang <tzhouam@connect.ust.hk>
Co-authored-by: root <root@hk01dgx028.cm.cluster>
Co-authored-by: XU Mingshi <91017482+mxuax@users.noreply.github.com>
Co-authored-by: amy-why-3459 <wuhaiyan17@huawei.com>
Co-authored-by: Rein Yang <ruiruyang2@gmail.com>
Co-authored-by: Ziming Huang <hzm414167@alibaba-inc.com>
Co-authored-by: dsinghvi <divyanshsinghvi@gmail.com>
Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: dongbo910220 <32610838+dongbo910220@users.noreply.github.com>
Co-authored-by: Ding Zuhao <e1583181@u.nus.edu>
Co-authored-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Co-authored-by: Pierre LE GUEN <26087574+PierreLeGuen@users.noreply.github.com>
Co-authored-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: ram16g <anlianfengjie@163.com>
Co-authored-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
Co-authored-by: Markus / Mark <46672778+marksverdhei@users.noreply.github.com>
Co-authored-by: Juan Pablo Zuluaga <46724788+JuanPZuluaga@users.noreply.github.com>
Co-authored-by: muziyuhui666 <111362884+muziyuhui666@users.noreply.github.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
Co-authored-by: ceanna93 <fairyanna@naver.com>
Co-authored-by: anna <lee.anna@navercorp.com>
Co-authored-by: Rustam Khadipash <16683750+hadipash@users.noreply.github.com>
Co-authored-by: Alicia <115451386+congw729@users.noreply.github.com>
Co-authored-by: hsliu_ustc <hsliu_ustc@noreply.gitcode.com>
Co-authored-by: liuzhenwei <zhenweiliu@habana.ai>
Co-authored-by: erfgss <97771661+erfgss@users.noreply.github.com>
Co-authored-by: Jensen <czjourney@163.com>
Co-authored-by: Junhong Liu <ljh_lbj@163.com>
Co-authored-by: weichen <calvin_zhu0210@outlook.com>
Co-authored-by: PopSoda2002 <zhouhp.me@gmail.com>
Co-authored-by: Yan Ma <yan.ma@intel.com>
Co-authored-by: ApsarasX <apsarax@outlook.com>
Co-authored-by: Chenguang Zheng <645327136@qq.com>
Co-authored-by: Jiaping Wu <53215702+ElleElleWu@users.noreply.github.com>
Co-authored-by: zhou zhuoxin <zhouzhuoxin1508@outlook.com>
Co-authored-by: Gao Han <gaohan19@huawei.com>
Co-authored-by: rein yang <73573651+R2-Y@users.noreply.github.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Flora Feng <4florafeng@gmail.com>
Co-authored-by: Sihao Li <111170255+GG-li@users.noreply.github.com>
Co-authored-by: ChenWenjing <54166744+Shirley125@users.noreply.github.com>
Co-authored-by: Bhanu068 <voutharoja.bhanu06@gmail.com>
Co-authored-by: John Liu BUAA <liukecheng97@gmail.com>
Co-authored-by: yenuo26 <410167048@qq.com>
Co-authored-by: knlnguyen1802 <knlnguyen1802@gmail.com>
Co-authored-by: liuzhenwei <zhenwei.liu@intel.com>
Co-authored-by: Isotr0py <Isotr0py@outlook.com>
Co-authored-by: ZJY0516 <zhu.jiangyun@foxmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Chendi.Xue <chendi.xue@intel.com>
Co-authored-by: Daniel Huang <daniel1.huang@intel.com>
Co-authored-by: Alex Brooks <albrooks@redhat.com>
Co-authored-by: Sy03 <1370724210@qq.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: UsamaKenway <56207634+UsamaKenway@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: wuhang <wuhang6@huawei.com>
Co-authored-by: pablo <pablo@agigo.ai>
Co-authored-by: SHIJIN ZHANG <75300765+Dovis01@users.noreply.github.com>
Co-authored-by: Dong W <89223086+sniper35@users.noreply.github.com>
Co-authored-by: Yanick Schraner <yanick.schraner@gmail.com>
Co-authored-by: Sangchun Ha <seomk9896@naver.com>
Co-authored-by: 亦瑾 <76905040+yJader@users.noreply.github.com>
Co-authored-by: junuxyz <216036880+junuxyz@users.noreply.github.com>
Co-authored-by: Yupu <feng.yu.pu0330@gmail.com>
Co-authored-by: Kevin H. Luu <khluu000@gmail.com>
Co-authored-by: zhumingjue138 <zhumingjue@huawei.com>
Co-authored-by: Canlin Guo <961750412@qq.com>
Co-authored-by: Jared Wen <w13431838023@gmail.com>
Co-authored-by: Xu Lu <572605156@qq.com>
Co-authored-by: xulusjb <fdukeshik@gmail.com>
Co-authored-by: Baoyuan Qi <qibaoyuan@xiaomi.com>
Co-authored-by: Zhang Shijin <zhangshijin@xiaomi.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: shijin zhang <zsj1364226740@gmail.com>
Co-authored-by: Prajwal A <34590600+LawJarp-A@users.noreply.github.com>
Co-authored-by: dingning <dingning7@xiaomi.com>
Co-authored-by: ning ding <nndding@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Nicolò Lucchesi <nicolo.lucchesi@gmail.com>
Co-authored-by: Ting FU <futing10@huawei.com>
Co-authored-by: developer-account <irteam@vllm-omni-dev-0.vllm-omni-dev.p-nb13557.svc.cluster.local>
Co-authored-by: Hyunjoon Jeong <hyunjoon.jeong@navercorp.com>
Labels

high priority (high priority issue, needs to be done asap), ready (label to trigger buildkite CI)

6 participants