
[diffusion] kernel: apply fused kernel #14437

Closed
yingluosanqian wants to merge 8 commits into sgl-project:main from yingluosanqian:python_fused_kernel

Conversation

@yingluosanqian
Collaborator

Related to #12799

Motivation

PR #14289 implements the fused kernel, and this PR uses it in the related diffusion models.

See the kernel PR (#14289) for kernel details (accuracy, benchmarks, profiling, etc.).

Modifications

  • Use the fused kernel in wan and hunyuan.
  • Add an accuracy test for the fused kernel.
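
For reviewers unfamiliar with the pattern, the sequence of operations being fused can be sketched roughly as follows. This is an illustrative NumPy reference only, under the assumption that the op fuses a gated residual add, a LayerNorm (without learned affine), and an adaLN-style scale/shift modulation; the exact semantics and parameter names live in the kernel PR (#14289):

```python
import numpy as np

def scale_residual_norm_scale_shift(x, residual, gate, scale, shift, eps=1e-6):
    """Illustrative unfused reference for the kind of op being fused:
    gated residual addition, then LayerNorm over the last dimension,
    then scale-and-shift modulation. Names and semantics are assumptions;
    see PR #14289 for the real kernel."""
    # 1. Gated residual addition
    hidden = residual + gate * x
    # 2. LayerNorm over the last dimension (no learned affine)
    mean = hidden.mean(axis=-1, keepdims=True)
    var = hidden.var(axis=-1, keepdims=True)
    normed = (hidden - mean) / np.sqrt(var + eps)
    # 3. adaLN-style scale-and-shift modulation
    return normed * (1.0 + scale) + shift, hidden
```

Fusing these three steps into one kernel avoids materializing the intermediate `hidden` and `normed` tensors in global memory, which is where the speedup comes from.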

Accuracy Tests

Benchmarking and Profiling

Checklist

  • Use the fused kernel to replace the unfused kernels.

@github-actions github-actions bot added the diffusion SGLang Diffusion label Dec 4, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @yingluosanqian, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a recently developed fused kernel for the ScaleResidualNormScaleShift operation into the wan and hunyuan diffusion models. This refactoring aims to enhance computational efficiency by combining multiple sequential operations into a single, optimized kernel. To ensure the reliability of this performance improvement, a comprehensive accuracy test has been introduced, comparing the fused kernel's output against a native Python implementation.

Highlights

  • Fused Kernel Integration: The ScaleResidualLayerNormScaleShift operation has been refactored to use a new fused kernel, ScaleResidualNormScaleShift, for improved performance.
  • Model Updates: The wan and hunyuan diffusion models have been updated to utilize this new fused kernel.
  • Accuracy Testing: A dedicated accuracy test suite has been added to validate the correctness of the ScaleResidualNormScaleShift fused kernel across various configurations.
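
The review summary notes that the refactored class is a CustomOp with separate forward_cuda and forward_native paths. The dispatch pattern can be sketched with a plain-Python stand-in; the real class subclasses SGLang's CustomOp, operates on torch tensors, and its CUDA path calls the fused kernel, so everything below is illustrative only:

```python
class ScaleResidualNormScaleShift:
    """Plain-Python stand-in illustrating the CustomOp dispatch pattern
    described in the review summary. The real class lives in SGLang,
    operates on torch tensors, and its forward_cuda path invokes the
    fused kernel from PR #14289."""

    def __init__(self, hidden_size, eps=1e-6, bias=False):
        self.hidden_size = hidden_size
        self.eps = eps
        self.bias = bias  # parameter added by this refactor, per the summary
        self.last_path = None

    def forward_native(self, x):
        # Reference path in plain framework ops; the accuracy test
        # compares the fused kernel's output against this.
        self.last_path = "native"
        return x

    def forward_cuda(self, x):
        # Would invoke the fused kernel; this sketch just records the
        # dispatch and returns the same result as the native path.
        self.last_path = "cuda"
        return x

    def forward(self, x, cuda_available=False):
        impl = self.forward_cuda if cuda_available else self.forward_native
        return impl(x)
```

Keeping the two paths behind one class is what makes the accuracy test straightforward: the same inputs can be run through both implementations and the outputs compared within dtype-appropriate tolerances.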

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the ScaleResidualLayerNormScaleShift class into ScaleResidualNormScaleShift, making it a CustomOp with distinct forward_cuda and forward_native implementations. The __init__ method is updated to include a bias parameter and streamline dtype handling. These changes are applied across several model files (causal_wanvideo.py, hunyuanvideo.py, wanvideo.py) where the class is instantiated, involving updates to parameter passing (e.g., removing compute_dtype and adding bias). A new test file (test_scale_residual_norm_scale_shift.py) is introduced to thoroughly validate the accuracy of the refactored class against its native implementation across various input shapes and data types. Review comments identified:

  • a critical variable renaming error in wanvideo.py that would cause an AttributeError,
  • an inconsistency in handling integer gate values between forward_cuda and forward_native in layernorm.py,
  • a bug using scale.size instead of scale.numel() for tensor size checks, and
  • a missing typing.Tuple import in the new test file.
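
The scale.size bug flagged above is worth a note for future reviews: on a torch.Tensor, size is a method, not an attribute, so an expression like scale.size == 1 compares a bound method object against an integer and is always False. A minimal stub standing in for torch.Tensor shows the pitfall:

```python
class FakeTensor:
    """Minimal stand-in mimicking torch.Tensor's size()/numel() interface,
    to illustrate why `t.size == 1` is a silent bug."""

    def __init__(self, shape):
        self.shape = shape

    def size(self, dim=None):
        # Like torch.Tensor.size: the full shape, or one dimension.
        return self.shape if dim is None else self.shape[dim]

    def numel(self):
        # Total element count (product of all dimensions).
        n = 1
        for d in self.shape:
            n *= d
        return n

scalar = FakeTensor(())
# Buggy check: compares the bound method object itself, never True.
buggy = (scalar.size == 1)
# Correct check: total element count.
correct = (scalar.numel() == 1)
```

NumPy arrays, by contrast, expose size as a plain attribute, which is likely how this pattern sneaks into PyTorch code.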

@yingluosanqian yingluosanqian changed the title [diffusion] kernel: use fuse kernel [diffusion] kernel: apply fused kernel Dec 4, 2025