
[diffusion] Add cache-dit CI tests#19213

Merged
mickqian merged 11 commits into sgl-project:main from qimcis:cachedit-test
May 10, 2026

Conversation

@qimcis
Contributor

@qimcis qimcis commented Feb 24, 2026

Motivation

Add CI tests for the changes in #16662, as well as a CI test for cache-dit on the native diffusion engine.

Modifications

Add a 1-GPU test, a 2-GPU test, and performance baselines.

Checklist

@github-actions github-actions Bot added the diffusion (SGLang Diffusion) label Feb 24, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @qimcis, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates new Continuous Integration (CI) tests for the cache-dit diffusers feature, covering both single and dual GPU setups. The primary goal is to ensure the stability and performance of this caching mechanism within the diffusers backend by establishing dedicated test cases and updating performance baselines. This enhancement improves the robustness of the multimodal generation server by validating critical configurations.

Highlights

  • New CI Tests: New CI tests for cache-dit diffusers functionality were introduced, covering both 1-GPU and 2-GPU configurations to ensure stability and performance.
  • Configuration Files Added: Dedicated YAML configuration files (cache_dit_config_1gpu.yaml, cache_dit_config_2gpu.yaml) were added to define specific cache-dit settings for the new tests.
  • Performance Baselines Updated: Performance baselines were updated to include expected metrics for the newly added cache-dit diffusers test cases.
  • Test Framework Extension: The test framework was extended to support passing diffusers_kwargs to the image generation client, allowing for more flexible testing of diffusers-specific parameters.


Changelog
  • python/sglang/multimodal_gen/test/server/cache_dit_config_1gpu.yaml
    • Added a new YAML configuration file for 1-GPU cache-dit settings.
  • python/sglang/multimodal_gen/test/server/cache_dit_config_2gpu.yaml
    • Added a new YAML configuration file for 2-GPU cache-dit settings, including parallelism parameters.
  • python/sglang/multimodal_gen/test/server/perf_baselines.json
    • Updated performance baselines by adding entries for new qwen_image_t2i_cache_dit_config_diffusers_1gpu and qwen_image_t2i_cache_dit_config_diffusers_2gpu test cases.
  • python/sglang/multimodal_gen/test/server/test_server_utils.py
    • Modified the generate_image function to pass diffusers_kwargs from sampling parameters to the image generation client.
  • python/sglang/multimodal_gen/test/server/testcase_configs.py
    • Added a diffusers_kwargs field to the DiffusionSamplingParams dataclass.
    • Introduced new DiffusionTestCase entries for qwen_image_t2i_cache_dit_config_diffusers_1gpu and qwen_image_t2i_cache_dit_config_diffusers_2gpu, configuring them with specific cache-dit and diffusers_kwargs.
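The diffusers_kwargs plumbing described in the changelog can be sketched roughly as follows. Apart from the DiffusionSamplingParams name, the diffusers_kwargs field, and the sample values from this PR, every name below is a simplified assumption, not the actual SGLang test code:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class DiffusionSamplingParams:
    # prompt/output_size defaults are illustrative; the real dataclass has more fields.
    prompt: str = ""
    output_size: str = "1024x1024"
    # New field from this PR: extra kwargs forwarded to the diffusers pipeline call.
    diffusers_kwargs: Optional[Dict[str, Any]] = None


def generate_image(client_call, params: DiffusionSamplingParams):
    # Hypothetical forwarding: include diffusers_kwargs only when it is set,
    # so existing test cases without the field are unaffected.
    kwargs: Dict[str, Any] = {"prompt": params.prompt, "output_size": params.output_size}
    if params.diffusers_kwargs is not None:
        kwargs["diffusers_kwargs"] = params.diffusers_kwargs
    return client_call(**kwargs)


# Stub client that simply echoes what it received:
received = generate_image(
    lambda **kw: kw,
    DiffusionSamplingParams(
        prompt="Doraemon is eating dorayaki.",
        diffusers_kwargs={"max_sequence_length": 512},
    ),
)
```

The conditional forwarding keeps older test cases, which never set diffusers_kwargs, byte-for-byte identical in what they send to the client.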
Activity
  • The author initiated this pull request to address the need for CI tests related to cache-dit diffusers functionality, as motivated by changes in [diffusion] Enable Cache‑DiT config for diffusers backend #16662.
  • The author has added new 1-GPU and 2-GPU test configurations along with their corresponding performance baselines.
  • The provided checklist indicates that code formatting, unit tests, and documentation updates have been completed.
  • The task to follow SGLang code style guidance is still pending.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds CI tests for cache-dit with the diffusers backend, covering both 1-GPU and 2-GPU scenarios. The changes are well-structured, including new YAML configurations, performance baselines, and test cases. The modification to pass diffusers_kwargs is appropriate. I have a couple of suggestions to improve code clarity and reduce duplication in the test case configuration.

Comment on lines +746 to +750

    DiffusionSamplingParams(
        prompt="Doraemon is eating dorayaki.",
        output_size="1024x1024",
        diffusers_kwargs={"max_sequence_length": 512},
    ),
Contributor


Severity: medium

To avoid duplicating the prompt and output_size from T2I_sampling_params, you can use dataclasses.replace. This makes the code cleaner and ensures consistency with other test cases. It also makes it clear that this configuration is a variation of T2I_sampling_params.

        replace(T2I_sampling_params, diffusers_kwargs={"max_sequence_length": 512}),
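As a quick illustration of the suggestion: dataclasses.replace copies a dataclass instance and overrides only the named fields. This standalone sketch uses a stand-in class rather than the real DiffusionSamplingParams:

```python
from dataclasses import dataclass, replace
from typing import Any, Dict, Optional


@dataclass(frozen=True)
class SamplingParams:
    # Stand-in for DiffusionSamplingParams; field names mirror the snippet above.
    prompt: str
    output_size: str
    diffusers_kwargs: Optional[Dict[str, Any]] = None


T2I_sampling_params = SamplingParams(
    prompt="Doraemon is eating dorayaki.",
    output_size="1024x1024",
)

# replace() returns a new instance; only diffusers_kwargs differs from the base.
variant = replace(T2I_sampling_params, diffusers_kwargs={"max_sequence_length": 512})
```

Because the variant is derived from T2I_sampling_params, a later change to the shared prompt or output size only needs to be made in one place.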

@mickqian
Collaborator

@DefTruth is this PR covering sufficient cases? could you provide some suggestions? cheers

@DefTruth
Contributor

> @DefTruth is this PR covering sufficient cases? could you provide some suggestions? cheers

@mickqian It would be better if some SCM-related tests could be added, e.g., cache_dit_scm_config.yaml.

@mickqian
Collaborator

could you attach the output of this case?

@qimcis
Contributor Author

qimcis commented Feb 25, 2026

output: (attached image)

@DefTruth
Contributor

DefTruth commented Mar 10, 2026

@qimcis @mickqian Any updates? I think it would be better if we could add some tests for the cache to help SGLD avoid errors related to cache (e.g., #19955, #19965) introduced by new commits.

@qimcis
Contributor Author

qimcis commented Mar 10, 2026

> @qimcis @mickqian Any updates? I think it would be better if we could add some tests for the cache to help SGLD avoid errors related to cache (e.g., #19955, #19965) introduced by new commits.

I can also help add more tests! Lmk if there are specific areas of coverage you think I should hit (besides the PR you mentioned already). Anything remaining I should address on this PR? @mickqian

@@ -0,0 +1,10 @@
cache_config:
Collaborator


gather to a configs folder?

@qimcis qimcis requested a review from mickqian March 10, 2026 05:09
@mickqian
Collaborator

/tag-and-rerun-ci

@yhyang201
Collaborator

/rerun-failed-ci

4 similar comments

@qimcis qimcis changed the title from "[diffusion] Add cache-dit diffusers CI tests" to "[diffusion] Add cache-dit CI tests" May 9, 2026
@qimcis
Contributor Author

qimcis commented May 9, 2026

I updated this PR. Instead of focusing only on cache-dit diffusers and the configs, I've also added a native SGLD 2-GPU SP-only cache-dit regression case, which uses the native SGLD cache-dit path with SP enabled and TP disabled, covering the class of issues from #19955 / #19965 around cache-dit parallelism config construction.

In addition, I kept the 1-GPU diffusers cache-dit config coverage that I added previously (including one SCM case).

Perf baselines have been regenerated on current main too (2× H100 SXM).

Output for the 1-GPU test qwen_image_t2i_cache_dit_config_diffusers_1gpu: (attached image)

For the 2-GPU test with Wan, the output is less important; it's meant simply to verify that the execution path does not crash and produces a valid generation.

Can you take a look to make sure things look ok before I get mick to review again? @DefTruth Thank you!

),
T2V_sampling_params,
),
DiffusionTestCase(
Collaborator


these tests are heavy, could we try to make them lightweight?

Contributor Author


I removed a redundant test and made some changes; is this OK? It passes locally for me @mickqian

@mickqian
Collaborator

mickqian commented May 9, 2026

CI is failing

@qimcis qimcis force-pushed the cachedit-test branch 2 times, most recently from 47dc984 to 6a5242c on May 9, 2026 16:33
@qimcis
Contributor Author

qimcis commented May 9, 2026

/rerun-failed-ci

1 similar comment

@qimcis
Contributor Author

qimcis commented May 10, 2026

/rerun-failed-ci

1 similar comment

@mickqian mickqian merged commit 44efc23 into sgl-project:main May 10, 2026
248 of 283 checks passed
ltcs11 added a commit to ltcs11/sglang that referenced this pull request May 11, 2026

Labels

diffusion (SGLang Diffusion), run-ci
