
[diffusion] fix TeaCache silently fails with --enable-teacache#19964

Merged
mickqian merged 7 commits into sgl-project:main from eitanturok:teacache-fix
Mar 7, 2026

Conversation

@eitanturok
Contributor

@eitanturok eitanturok commented Mar 5, 2026

Motivation

This PR fixes a bug where TeaCache was silently disabled even when --enable-teacache was specified.

The issue stemmed from Req.teacache_params defaulting to None, which prevented the attribute lookup from delegating to the actual values stored in sampling_params. This is a prerequisite for #19957.

The Problem

In cache/teacache.py, the cache is bypassed if forward_batch.teacache_params is None.

Currently, the Req class and sampling_params both have a teacache_params field.

  1. Model-specific configs (like WanT2V_1_3B_SamplingParams) correctly populate sampling_params.teacache_params.
  2. However, the Req object initializes its own teacache_params as None.
  3. Because Req uses __getattr__ for delegation, the lookup only falls through to sampling_params when the attribute does not exist on Req itself (__getattr__ is invoked only after normal attribute lookup fails).
  4. Since Req.teacache_params exists (as None), the delegation never happens, and the cache logic assumes TeaCache is disabled.

The Solution

  • Removed the explicit teacache_params field from the Req class.
  • By removing the field, Req now correctly hits __getattr__ when teacache_params is accessed.
  • The lookup now successfully proxies to sampling_params.teacache_params, ensuring the model-specific configurations are respected.
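The lookup rules behind this fix can be sketched in isolation. This is a minimal toy, not the actual Req class; the SamplingParams contents are hypothetical. The key point is that __getattr__ fires only when normal attribute lookup fails, so defining the field on the request object shadows the delegated value:

```python
class SamplingParams:
    def __init__(self):
        # Hypothetical model-specific TeaCache config.
        self.teacache_params = {"threshold": 0.2}

class BuggyReq:
    def __init__(self, sampling_params):
        self.sampling_params = sampling_params
        self.teacache_params = None  # shadows the delegated attribute

    def __getattr__(self, name):
        # Only invoked when 'name' is NOT found on the instance/class,
        # so it never runs for teacache_params above.
        return getattr(self.sampling_params, name)

class FixedReq:
    def __init__(self, sampling_params):
        self.sampling_params = sampling_params
        # No teacache_params field: lookup falls through to __getattr__.

    def __getattr__(self, name):
        return getattr(self.sampling_params, name)

sp = SamplingParams()
print(BuggyReq(sp).teacache_params)  # None -> TeaCache silently disabled
print(FixedReq(sp).teacache_params)  # {'threshold': 0.2}
```

Removing the field is the whole fix; the existing __getattr__ delegation needs no new logic.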

Verification Results

With TeaCache enabled, Wan2.1-T2V-1.3B generates a video 1.7x faster than the baseline on my branch (86.46 s vs. 150.62 s), while on main the --enable-teacache run takes the same time as the baseline (150.74 s vs. 150.62 s). All runs were on an A100. As expected, the video from my branch is lower quality than the baseline (TeaCache skips denoising steps), while the --enable-teacache video on main is identical in quality to the baseline, confirming the flag was silently ignored.

Teacache (my branch)

sglang generate --model-path "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." --num-inference-steps 50 --num-frames 81 --width 832 --height 480 --save-output --output-path outputs --output-file-name fight-teacache.mp4 --seed 42 --pin-cpu-memory --enable-teacache
fight-teacache-mybranch.mp4

[03-05 15:15:35] Completed batch processing. Generated 1 outputs in 86.46 seconds

Teacache (main)

sglang generate --model-path "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." --num-inference-steps 50 --num-frames 81 --width 832 --height 480 --save-output --output-path outputs --output-file-name fight-teacache.mp4 --seed 42 --pin-cpu-memory --enable-teacache

[03-05 15:27:20] Completed batch processing. Generated 1 outputs in 150.74 seconds

fight-teacache-main.mp4

Baseline (main)

sglang generate --model-path "Wan-AI/Wan2.1-T2V-1.3B-Diffusers" --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage." --num-inference-steps 50 --num-frames 81 --width 832 --height 480 --save-output --output-path outputs --output-file-name fight-baseline.mp4 --seed 42 --pin-cpu-memory

[03-05 15:48:35] Completed batch processing. Generated 1 outputs in 150.62 seconds

fight-baseline-main.mp4

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@github-actions github-actions bot added the diffusion SGLang Diffusion label Mar 5, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug that prevented the TeaCache optimization from activating, even when explicitly enabled. By refining the Req object's attribute handling, the system now correctly delegates TeaCache parameters, leading to significant performance improvements for models utilizing this caching mechanism.

Highlights

  • TeaCache Bug Fix: Resolved an issue where TeaCache was silently disabled despite being enabled via --enable-teacache. The Req object's teacache_params field defaulting to None prevented correct delegation to sampling_params.
  • Delegation Mechanism: Modified the Req class by removing its explicit teacache_params field, ensuring that Python's __getattr__ mechanism correctly delegates to sampling_params.teacache_params when accessed.
  • Performance Improvement: Verified the fix by demonstrating a 1.7x speedup for Wan2.1-T2V-1.3B when TeaCache is enabled, confirming the cache is now active and functioning as intended.


Changelog
  • python/sglang/multimodal_gen/runtime/pipelines_core/schedule_batch.py
    • Removed imports for TeaCacheParams and WanTeaCacheParams.
    • Removed the teacache_params attribute from the Req class.
Activity
  • Provided detailed verification results, including command-line outputs and speed comparisons, demonstrating the fix's effectiveness.
  • Confirmed adherence to code formatting standards by checking the relevant checklist item.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a bug that caused TeaCache to be silently disabled. The issue stemmed from the Req class defining a teacache_params field, which was initialized to None. This prevented the __getattr__ method from delegating the attribute lookup to the sampling_params object where the actual configuration resides. The fix involves removing the teacache_params field from the Req class, which correctly restores the delegation behavior. The corresponding unused imports have also been removed. The change is well-targeted and effectively resolves the bug.

eitanturok added a commit to eitanturok/sglang that referenced this pull request Mar 5, 2026
@yhyang201
Collaborator

/tag-and-rerun-ci

@github-actions github-actions bot added the run-ci label Mar 5, 2026
@eitanturok
Contributor Author

eitanturok commented Mar 5, 2026

Should I update perf_baselines.json for better testing?

According to perf_baselines.json, the denoise_step_ms for wan2_1_t2v_1.3b_teacache_enabled remains nearly constant across all steps (~247 ms per step). If TeaCache were successfully skipping steps, we would see significant variance in these timings, with 'skipped' steps appearing substantially faster than full computation steps. This is further evidence that the current baseline was recorded with TeaCache silently disabled.

Below are the results from benchmarking wan2_1_t2v_1.3b_teacache_enabled on this branch on an H100 in CI. Each full step takes ~130 ms and each skipped step ~44 ms.

"wan2_1_t2v_1.3b_teacache_enabled": {
    "stages_ms": {
        "DenoisingStage": 4598.36,
        "InputValidationStage": 0.07,
        "DecodingStage": 552.92,
        "LatentPreparationStage": 0.26,
        "per_frame_generation": Infinity,
        "TextEncodingStage": 1114.01,
        "TimestepPreparationStage": 2.1
    },
    "denoise_step_ms": {
        "0": 94.24,
        "1": 172.68,
        "2": 169.48,
        "3": 169.08,
        "4": 168.38,
        "5": 167.27,
        "6": 62.95,
        "7": 119.56,
        "8": 53.34,
        "9": 121.85,
        "10": 47.64,
        "11": 125.75,
        "12": 3.24,
        "13": 48.21,
        "14": 125.17,
        "15": 3.71,
        "16": 48.15,
        "17": 124.61,
        "18": 3.3,
        "19": 47.25,
        "20": 129.33,
        "21": 3.11,
        "22": 48.03,
        "23": 127.46,
        "24": 3.37,
        "25": 45.6,
        "26": 127.17,
        "27": 3.35,
        "28": 49.83,
        "29": 125.42,
        "30": 3.19,
        "31": 42.76,
        "32": 131.19,
        "33": 2.93,
        "34": 130.04,
        "35": 44.77,
        "36": 131.45,
        "37": 44.06,
        "38": 131.02,
        "39": 43.48,
        "40": 130.42,
        "41": 45.24,
        "42": 129.46,
        "43": 44.6,
        "44": 130.33,
        "45": 173.84,
        "46": 175.58,
        "47": 168.16,
        "48": 173.85,
        "49": 177.56
    },
    "expected_e2e_ms": 6497.84,
    "expected_avg_denoise_ms": 91.85,
    "expected_median_denoise_ms": 120.7
},
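The variance argument can be made concrete with a quick check. This is a sketch using a handful of timings hand-copied from the tables above; the coefficient-of-variation approach is my own illustration, not anything in sglang:

```python
import statistics

# A few per-step denoise timings (ms) copied from the data above.
flat_main = [247.0] * 5                          # old baseline: nearly constant
mixed_branch = [172.7, 62.9, 3.2, 44.8, 131.0]  # this branch: full + skipped steps

def cv(xs):
    """Coefficient of variation: population stddev divided by mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

print(f"main CV:   {cv(flat_main):.2f}")     # ~0.00: no steps being skipped
print(f"branch CV: {cv(mixed_branch):.2f}")  # large: cache is skipping steps
```

A near-zero CV means every step does full computation; a large CV is the signature of a mix of full and cache-skipped steps.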

@mickqian
Collaborator

mickqian commented Mar 6, 2026

@eitanturok if the performance is theoretically affected by this PR, then yes. Otherwise no. cheers

@eitanturok
Contributor Author

@mickqian I updated perf_baselines.json because this does affect performance.

Any other comments?

@yhyang201
Collaborator

@mickqian Nvidia CI passed and PR is approved, ready for merge

— SGLDHelper bot

@mickqian mickqian merged commit 31e93e4 into sgl-project:main Mar 7, 2026
67 checks passed

Labels

diffusion SGLang Diffusion run-ci
