
feat: HyperCLOVAX-SEED-Omni-8B full pipeline (vision decoder + S2S/T2V E2E)#1

Merged

with1015 merged 4 commits into with1015:model/hyperclovax-audio from KilJaeeun:hcx-omni-pipeline-fixes on Apr 6, 2026
Conversation

@KilJaeeun (Collaborator) commented Apr 6, 2026

Summary

Builds on vllm-project#869 (HyperCLOVAX audio decoder) to complete the full 3-stage HyperCLOVAX-SEED-Omni-8B pipeline, with E2E-validated Speech-to-Speech and Text-to-Vision paths.

Commits

1. feat: vision pipeline, thinker model, stage config (길재은)

  • diffusion/models/hyperclovax_vision/: HyperCLOVAX vision diffusion pipeline
  • model_executor/models/hcx_omni/: HCX Omni thinker model
  • model_executor/stage_configs/hcx_omni.yaml: 3-stage pipeline config
  • model_executor/stage_input_processors/hyperclovax_seed_omni.py: thinker→decoder token routing
  • examples/, tests/: client demo, e2e and unit tests
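The thinker→decoder token routing in `stage_input_processors` can be pictured as splitting the Stage-0 output stream by token-ID range. The sketch below is illustrative only: the token-ID ranges, names, and data shapes are assumptions, not the actual `hyperclovax_seed_omni.py` implementation.

```python
from dataclasses import dataclass, field

# Assumed (hypothetical) special-token ID ranges for the decoder codebooks.
VISION_TOKEN_RANGE = range(151_000, 152_000)
AUDIO_TOKEN_RANGE = range(152_000, 153_000)


@dataclass
class RoutedTokens:
    """Per-decoder token streams extracted from one thinker output."""
    vision: list = field(default_factory=list)
    audio: list = field(default_factory=list)
    text: list = field(default_factory=list)


def route_thinker_tokens(token_ids: list) -> RoutedTokens:
    """Split a mixed thinker output sequence into vision/audio/text streams."""
    out = RoutedTokens()
    for tok in token_ids:
        if tok in VISION_TOKEN_RANGE:
            out.vision.append(tok)   # forwarded to the Stage-1 vision decoder
        elif tok in AUDIO_TOKEN_RANGE:
            out.audio.append(tok)    # forwarded to the Stage-2 audio decoder
        else:
            out.text.append(tok)     # kept as ordinary text output
    return out
```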

2. fix: async fan-out topology, serving pipeline, vLLM 0.18.0 compat

  • async_omni.py: redesign _process_sequential_results for fan-out topology — Stage-0 independently routes to Stage-1 (vision) AND Stage-2 (audio) via engine_input_source
  • serving_chat.py: add _stage0_is_llm guard to prevent GLM-Image path clobbering HCX Omni multimodal inputs
  • vLLM 0.18.0 API alignment across entrypoints and workers
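The fan-out idea above can be sketched as a small dispatch function: each downstream stage declares which stages feed it (`engine_input_source`), and Stage-0 results go to every stage that lists Stage-0, unless the stage was conditionally skipped. Names and shapes are assumptions, not the actual `async_omni.py` API.

```python
def fan_out(stage0_result: dict,
            engine_input_source: dict,
            skipped_stages: set) -> dict:
    """Return {stage_id: inputs} for every downstream stage fed by Stage-0.

    engine_input_source maps stage_id -> list of upstream stage ids.
    skipped_stages holds stages disabled for this request (conditional
    routing, e.g. no image tokens were generated).
    """
    dispatch = {}
    for stage_id, sources in engine_input_source.items():
        if stage_id in skipped_stages:
            continue
        if 0 in sources:  # this stage consumes Stage-0 output
            dispatch[stage_id] = stage0_result
    return dispatch
```

With a topology like `{1: [0], 2: [0]}` (Stage-1 vision and Stage-2 audio both fed by the Stage-0 thinker), one Stage-0 result is dispatched to both decoders independently.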

3. fix: diffusion IPC, audio/vision decoder E2E fixes

  • diffusion/ipc.py, diffusion_engine.py, diffusion_worker.py: IPC stability for multi-stage diffusion
  • pipeline_hyperclovax_audio.py: finetuned decoder path, speaker embedding fallback
  • registry.py, request.py: HCX Omni diffusion registration
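The speaker-embedding fallback mentioned above follows a common pattern: try to load a reference speaker embedding, and fall back to a neutral zero-shot embedding when it is missing or unreadable. The file format, dimension, and function name here are illustrative assumptions, not the actual `pipeline_hyperclovax_audio.py` code.

```python
import json


def load_speaker_embedding(path: str, dim: int = 192) -> list:
    """Load a reference speaker embedding; fall back to zero-shot.

    Assumes (hypothetically) a JSON list of floats on disk. If the file
    is absent or malformed, a neutral all-zeros embedding is returned so
    the audio decoder can still synthesize speech.
    """
    try:
        with open(path) as f:
            emb = json.load(f)
        if len(emb) != dim:
            raise ValueError(f"expected {dim}-dim embedding, got {len(emb)}")
        return emb
    except (FileNotFoundError, json.JSONDecodeError):
        return [0.0] * dim  # zero-shot speaker fallback
```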

E2E Test Results

| Scenario | Input | Output | Result |
| --- | --- | --- | --- |
| Speech-to-Speech | 440 Hz sine wave (1 s, 16 kHz WAV) | 11.84 s / 568 KB WAV (BigVGAN, 24 kHz) | ✅ PASS |
| Text-to-Vision | text prompt | 768×768 PNG (diffusion, 50 steps) | ✅ PASS |
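The S2S test input (1 s of 440 Hz sine at 16 kHz, mono 16-bit PCM) can be reproduced with the Python standard library alone; the output path below is arbitrary:

```python
import math
import os
import struct
import tempfile
import wave


def write_sine_wav(path: str, freq: float = 440.0,
                   seconds: float = 1.0, rate: int = 16_000) -> None:
    """Write a mono 16-bit PCM sine tone, matching the S2S test input."""
    n = int(seconds * rate)
    samples = (int(32767 * math.sin(2 * math.pi * freq * i / rate))
               for i in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))


out_path = os.path.join(tempfile.gettempdir(), "s2s_input.wav")
write_sine_wav(out_path)
```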

길재은 and others added 4 commits on April 6, 2026 at 09:18

feat: vision pipeline, thinker model, stage config

- diffusion/models/hyperclovax_vision/: HyperCLOVAX vision diffusion pipeline
  (transformer, layers, vision_token_embedder, pipeline)
- model_executor/models/hcx_omni/: HCX Omni thinker model
- model_executor/stage_configs/hcx_omni.yaml: 3-stage pipeline config
  (Stage-0 LLM thinker, Stage-1 vision decoder, Stage-2 audio decoder)
- model_executor/stage_input_processors/hyperclovax_seed_omni.py:
  thinker→vision/audio token routing
- engine/, entrypoints/: arg_utils, input_processor, omni_llm, zmq_utils,
  stage_utils, cli/main integration
- examples/online_serving/hcx_omni/: client demo and run script
- tests/: e2e and unit tests for HCX Omni

Co-Authored-By: Hyunjoon Jeong <with1015@unist.ac.kr>
fix: async fan-out topology, serving pipeline, vLLM 0.18.0 compat

- async_omni.py: redesign _process_sequential_results for fan-out topology
  — Stage-0 forwards to Stage-1 (vision) AND Stage-2 (audio) independently
  based on engine_input_source; add skipped_stages for conditional routing
- serving_chat.py: add _stage0_is_llm guard so GLM-Image bare-text
  replacement does not clobber HCX Omni Stage-0 multimodal inputs;
  handle audio output in _create_chat_completion_response
- async_omni_diffusion.py, omni_stage.py: vLLM 0.18.0 API alignment
- worker/gpu_ar_model_runner.py, async_omni_llm.py: compatibility fixes

Co-Authored-By: 길재은 <jaeeun.kil@navercorp.com>
Co-Authored-By: Hyunjoon Jeong <with1015@unist.ac.kr>
fix: diffusion IPC, audio/vision decoder E2E fixes

- diffusion/ipc.py, diffusion_engine.py, diffusion_worker.py:
  IPC stability and worker lifecycle fixes for HCX audio+vision stages
- diffusion/models/hyperclovax_audio/pipeline_hyperclovax_audio.py:
  finetuned audio decoder path, transformers_modules deserialization,
  zero-shot speaker embedding fallback
- diffusion/registry.py, request.py: HCX Omni diffusion model registration
  and request type handling

Validated E2E with HyperCLOVAX-SEED-Omni-8B:
  Speech-to-Speech → 11.84s / 568KB WAV (BigVGAN, 24kHz)
  Text-to-Vision   → 768×768 PNG (diffusion, 50 steps)

Co-Authored-By: 길재은 <jaeeun.kil@navercorp.com>
Co-Authored-By: Hyunjoon Jeong <with1015@unist.ac.kr>
- tests/unit/conftest.py: stub vllm_omni heavy init so unit tests can
  import stage_input_processors without a full vLLM installation
- vllm_omni/config/model.py: guard _RUNNER_TASKS / TaskOption imports
  with try/except fallback for vLLM 0.18.0 where these were removed

Co-Authored-By: 길재은 <jaeeun.kil@navercorp.com>
Co-Authored-By: Hyunjoon Jeong <with1015@unist.ac.kr>
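The vLLM 0.18.0 import guard described above follows the usual try/except fallback pattern: import the symbols where older vLLM versions defined them, and substitute local stand-ins when the import fails. The fallback values below are illustrative, not the actual `vllm_omni/config/model.py` contents.

```python
try:
    # Present in older vLLM releases; removed in 0.18.0.
    from vllm.config import _RUNNER_TASKS, TaskOption
except ImportError:
    # Minimal stand-ins so downstream code keeps importing cleanly
    # (hypothetical fallback values).
    TaskOption = str
    _RUNNER_TASKS = {"generate": None}
```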
@KilJaeeun (Collaborator, Author) commented:

Test Plan Results

Verified against a running 3-stage HyperCLOVAX-SEED-Omni-8B server (hcx_3stage.yaml, TP=2 on GPUs 4–5 + decoders on GPUs 6–7).

| # | Test | Result |
| --- | --- | --- |
| 1 | Server health (GET /health) | ✅ HTTP 200 |
| 2 | Speech-to-Speech (audio input → audio output) | ✅ 568,364 bytes WAV |
| 3 | Text-to-Vision (text prompt → image output) | ✅ 189,767 bytes PNG |
| 4 | Unit tests (pytest tests/unit/, 12 tests) | ✅ 12/12 PASS (82% coverage) |
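Test #1 amounts to a plain HTTP probe; a minimal standard-library version is below (the base URL is an assumption for a locally running server):

```python
import urllib.request


def server_healthy(base_url: str = "http://localhost:8000") -> bool:
    """Return True iff GET {base_url}/health answers HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, DNS failure, timeout, ...
        return False
```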

S2S

Choice[0]: content='입력 모달리티와 사용자의 의도를 우선 고려하여 답변 형식을 정의합니다...' ("The answer format is defined by first considering the input modality and the user's intent...") audio=False
Choice[1]: audio=True  →  568,364 bytes saved to /tmp/s2s_output.wav
[PASS] Speech-to-Speech

T2V

Total choices: 2
Image output: 189,767 bytes → saved to /tmp/t2v_output.png
[PASS] Text-to-Vision

Unit Tests

collected 12 items
tests/unit/model_executor/test_hcx_omni_processing.py ..........  [100%]

Name                                                                    Stmts   Miss  Cover
-------------------------------------------------------------------------------------------
vllm_omni/model_executor/stage_input_processors/hyperclovax_seed_omni   55     10    82%
-------------------------------------------------------------------------------------------
TOTAL                                                                     55     10    82%

12 passed in 0.08s

@with1015 with1015 merged commit 644dfed into with1015:model/hyperclovax-audio Apr 6, 2026
