
[Rework] Add SwapAB Optimization for triton fused_moe_kernel on SM90. #16723

Merged
Fridge003 merged 3 commits into sgl-project:main from Insideyyy:fused_moe_swap_ab on Jan 10, 2026

[Rework] Add SwapAB Optimization for triton fused_moe_kernel on SM90.#16723
Fridge003 merged 3 commits intosgl-project:mainfrom
Insideyyy:fused_moe_swap_ab

Conversation

@Insideyyy
Contributor

Motivation

Rework of the reverted PR #15712, with the AMD CI failures fixed.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments (/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci) or contact authorized users to do so.
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Insideyyy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request re-introduces a performance optimization, dubbed 'SwapAB', for the fused Mixture-of-Experts (MoE) kernel within the SGLang runtime. The core change involves conditionally transposing input matrices during the kernel's computation to improve efficiency on specific NVIDIA SM90 GPUs, particularly the H20. This re-implementation addresses and resolves the AMD CI failures that led to the previous reversion of this optimization, ensuring stability while delivering targeted performance enhancements.

Highlights

  • Reintroduced SwapAB Optimization: This pull request re-implements the 'SwapAB' optimization for the Triton fused Mixture-of-Experts (MoE) kernel, which was previously reverted due to AMD CI failures. The current implementation includes fixes to prevent these failures.
  • Conditional Optimization Activation: A new utility function, should_enable_swap_ab, has been added to conditionally activate the SwapAB optimization. It is currently enabled only for specific NVIDIA SM90 GPUs (specifically H20, excluding H200) and under certain BLOCK_SIZE_M (< 64) and BLOCK_SIZE_N (>= 64) conditions, primarily when using FP8 weight-activation quantization.
  • Kernel Logic Modification: The fused_moe_kernel has been updated to accept a swap_ab flag. When enabled, it transposes the input matrices (A and B) before the dot product and transposes the final accumulator back to maintain correct output dimensions, optimizing performance for specific hardware configurations; a sketch of this pattern follows this list.
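
A minimal, hypothetical Triton sketch of that pattern on a plain matmul (illustrative only, not the sgl-project kernel; the name matmul_swap_ab_kernel and the SWAP_AB constexpr flag are assumptions here): under the flag, each tile product is computed as Bᵀ·Aᵀ and the accumulator is transposed back before the store, so the large BLOCK_N tile feeds the MMA's M dimension when BLOCK_M is small.

```python
import triton
import triton.language as tl


@triton.jit
def matmul_swap_ab_kernel(
    a_ptr, b_ptr, c_ptr,
    M, N, K,
    stride_am, stride_ak,
    stride_bk, stride_bn,
    stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
    SWAP_AB: tl.constexpr,
):
    # Sketch only: assumes M, N, K are multiples of the block sizes,
    # so boundary masks are omitted for brevity.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn
    if SWAP_AB:
        # Accumulate B^T @ A^T in a [BLOCK_N, BLOCK_M] tile so the large
        # N block drives the MMA's M dimension (Hopper WGMMA wants M >= 64).
        acc = tl.zeros((BLOCK_N, BLOCK_M), dtype=tl.float32)
    else:
        acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for _ in range(0, tl.cdiv(K, BLOCK_K)):
        a = tl.load(a_ptrs)  # [BLOCK_M, BLOCK_K]
        b = tl.load(b_ptrs)  # [BLOCK_K, BLOCK_N]
        if SWAP_AB:
            acc += tl.dot(tl.trans(b), tl.trans(a))
        else:
            acc += tl.dot(a, b)
        a_ptrs += BLOCK_K * stride_ak
        b_ptrs += BLOCK_K * stride_bk
    if SWAP_AB:
        # Transpose back so the stored tile is [BLOCK_M, BLOCK_N] either way.
        acc = tl.trans(acc)
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.store(c_ptrs, acc.to(tl.float16))  # assumes fp16 output for the sketch
```

With the block-size conditions the PR checks (BLOCK_SIZE_M < 64, BLOCK_SIZE_N >= 64), e.g. BLOCK_M=16 and BLOCK_N=128, the swapped dot presents a 128-row operand to the tensor cores instead of a 16-row one.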


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces the SwapAB optimization for the Triton fused_moe_kernel on SM90 GPUs, specifically targeting H20 devices for now. The changes are well-contained and add the necessary logic to detect the target hardware and conditionally apply the optimization within the kernel. The implementation appears correct. I have a couple of suggestions to enhance code readability and reduce duplication, which will improve long-term maintainability.

Comment on lines +68 to +75:

```python
device_name = get_device_name()
is_h20_device = device_name and "H20" in device_name and "H200" not in device_name
return (
    is_h20_device
    and is_sm90_supported()
    and BLOCK_SIZE_M < 64
    and BLOCK_SIZE_N >= 64
)
```

Severity: medium

The logic to determine if the optimization should be enabled can be made more direct and readable by combining the checks into a single return statement and using is not None for explicit None checking. This removes the intermediate is_h20_device variable and makes the function's intent clearer at a glance.

Suggested change:

```diff
 device_name = get_device_name()
-is_h20_device = device_name and "H20" in device_name and "H200" not in device_name
 return (
-    is_h20_device
+    device_name is not None
+    and "H20" in device_name
+    and "H200" not in device_name
     and is_sm90_supported()
     and BLOCK_SIZE_M < 64
     and BLOCK_SIZE_N >= 64
 )
```
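
For context, the quoted lines read as a complete helper roughly as follows. The name should_enable_swap_ab comes from the PR summary above, and get_device_name/is_sm90_supported appear in the quoted code, but the exact signature, module, and imports are assumptions here, not a copy of the file. Note that the "H200" exclusion is needed because "H20" is a substring of "NVIDIA H200".

```python
# Hypothetical reconstruction around the quoted lines (with the reviewer's
# suggestion applied); the real signature and imports in the PR may differ.
def should_enable_swap_ab(BLOCK_SIZE_M: int, BLOCK_SIZE_N: int) -> bool:
    device_name = get_device_name()  # e.g. "NVIDIA H20" (helper from the PR)
    return (
        device_name is not None
        and "H20" in device_name
        and "H200" not in device_name  # "NVIDIA H200" also contains "H20"
        and is_sm90_supported()        # SM90 capability check from the PR
        and BLOCK_SIZE_M < 64          # small M tiles underfill Hopper MMA
        and BLOCK_SIZE_N >= 64         # a large N tile can take M's place
    )
```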

@Insideyyy changed the title from "Add SwapAB Optimization for triton fused_moe_kernel on SM90." to "[Rework] Add SwapAB Optimization for triton fused_moe_kernel on SM90." on Jan 8, 2026
@ClawSeven
Collaborator

/tag-and-rerun-ci

@github-actions (bot) added the run-ci label on Jan 8, 2026
@Fridge003
Collaborator

/rerun-failed-ci

@Fridge003 merged commit 67b61a4 into sgl-project:main on Jan 10, 2026
265 of 306 checks passed