
Expand deep_gemm entrypoint to support more FP8 recipes. #17294

Draft
zianglih wants to merge 1 commit into sgl-project:main from zianglih:dg

Conversation


@zianglih zianglih commented Jan 18, 2026

Motivation

This is a preliminary step for DeepGEMM + MXFP8 integration (#17093).
Currently, the SGLang DeepGEMM entrypoint hardcodes the (1, 128, 128) recipe with FP32 scaling factors.
This PR exposes a recipe argument in the entrypoint, allowing more general FP8 recipes.
For example, after deepseek-ai/DeepGEMM#280, MXFP8 can be handled by passing a (1, 1, 32) recipe.

Modifications

Optionally pass recipe and sf_dtype to the entrypoint.
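The intended default behavior can be sketched as follows. This is a hypothetical illustration of the recipe plumbing only, not the actual SGLang code; the helper name and constant are assumptions:

```python
from typing import Optional, Tuple

# Previously hardcoded (block_m, block_n, block_k) recipe.
DEFAULT_RECIPE = (1, 128, 128)

def resolve_recipe(recipe: Optional[Tuple[int, int, int]]) -> Tuple[int, int, int]:
    # None preserves the old behavior; callers may pass e.g. (1, 1, 32) for MXFP8.
    return DEFAULT_RECIPE if recipe is None else recipe

print(resolve_recipe(None))        # (1, 128, 128)
print(resolve_recipe((1, 1, 32)))  # (1, 1, 32)
```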

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zianglih, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the flexibility of DeepGEMM's FP8 operations by parameterizing the block sizes and scaling factor data types. This generalization moves away from a rigid, hardcoded configuration, enabling broader compatibility with various FP8 quantization schemes, most notably paving the way for seamless integration with MXFP8. The changes primarily involve modifying function signatures and internal tensor dimension calculations to adapt to these new configurable parameters.

Highlights

  • Expanded DeepGEMM FP8 Entrypoint: The DeepGEMM entrypoint functions (grouped_gemm_nt_f8f8bf16_masked, grouped_gemm_nt_f8f8bf16_contig, gemm_nt_f8f8bf16) now accept optional recipe (a tuple defining block sizes like (block_m, block_n, block_k)) and sf_dtype (scaling factor data type) parameters. This removes the previous hardcoded (1, 128, 128) recipe and FP32 scaling factor type.
  • Dynamic Scaling Factor Tensor Sizing: The internal utility functions (_empty_token_fp8, _empty_block_fp8) responsible for creating scaling factor tensors now dynamically calculate their dimensions based on the provided recipe and sf_dtype. This replaces a fixed block size of 128 with a configurable block_k or block_n derived from the recipe, and adjusts for different scaling factor storage requirements (e.g., torch.int vs torch.float32).
  • MXFP8 Integration Support: This change is a foundational step towards integrating MXFP8 (microscaling FP8) quantization, enabling DeepGEMM to handle more diverse FP8 configurations, such as a (1, 1, 32) recipe, which was previously not supported due to hardcoded parameters.
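The dynamic scaling-factor sizing described above can be sketched with a small self-contained calculation. `sf_last_dim` and the `elems_per_scale` values (1 for FP32 scales; 4 assuming four packed scales per int32 storage element) are illustrative assumptions, not the PR's actual helper names:

```python
def ceil_div(a: int, b: int) -> int:
    return (a + b - 1) // b

def sf_last_dim(k: int, block_k: int, elems_per_scale: int) -> int:
    # Last dimension of the scaling-factor tensor along K:
    # one scale per block_k elements of K, with elems_per_scale
    # scales packed into each storage element.
    return ceil_div(k, block_k * elems_per_scale)

# Old hardcoded recipe: FP32 scales, one per 128-element block.
print(sf_last_dim(7168, 128, 1))  # 56
# MXFP8-style (1, 1, 32) recipe with scales packed 4-per-int32.
print(sf_last_dim(7168, 32, 4))   # 56
```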


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request expands the deep_gemm entrypoint to support more FP8 recipes by exposing recipe and sf_dtype parameters. This is a necessary step for MXFP8 integration. The changes are well-contained and correctly plumb the new parameters through the compilation and execution paths of deep_gemm. I've pointed out a minor inconsistency in the warmup executor classes where sf_dtype is not consistently stored. Addressing this would improve code maintainability.

@zianglih
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request expands the deep_gemm entrypoint to support more FP8 recipes by introducing recipe and sf_dtype parameters. The changes are well-contained and propagate these new parameters through the call stack, from the entrypoint functions down to the warmup executors and tensor creation helpers. This makes the implementation more flexible and prepares it for future integrations like MXFP8.

My review focuses on improving code clarity and maintainability. I've pointed out a few instances of unused variables and duplicated code blocks. Addressing these points will make the code cleaner and easier to maintain in the long run. Overall, this is a good step towards more general FP8 support.

Comment on lines +237 to +242
```python
if sf_dtype is None or sf_dtype == torch.float32:
    sf_storage_elements_per_scale = 1
elif sf_dtype == torch.int:
    sf_storage_elements_per_scale = 4
else:
    raise ValueError(f"Unimplemented sf_dtype: {sf_dtype}")
```
Contributor


Severity: medium

This logic for handling sf_dtype is duplicated in _empty_block_fp8 (lines 263-268). To improve maintainability and reduce code duplication, consider extracting this logic into a shared helper function. This would also be a good place to make the handling of sf_dtype=None more explicit by defaulting it to torch.float32 at the beginning of the helper.

```python
if recipe is None:
    block_n = block_k = 128
else:
    block_m, block_n, block_k = recipe
```
Contributor


Severity: medium

The variable block_m is unpacked from recipe but is not used in this function. You can use _ to indicate that it is intentionally ignored to improve code clarity.

Suggested change:

```diff
- block_m, block_n, block_k = recipe
+ _, block_n, block_k = recipe
```

@zianglih
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request expands the deep_gemm entrypoint to support more FP8 recipes by passing recipe and sf_dtype through the call stack. This is a good step towards more flexible FP8 support. The changes are mostly correct, but I have identified a few areas for improvement. There is a high-severity issue with memory estimation for warmup, which could lead to OOM errors. The memory calculation needs to be updated to account for the new configurable parameters. Additionally, there is some code duplication that can be refactored for better maintainability, and a small improvement for code clarity by making an implicit default dtype explicit.

Comment on lines +237 to +242
```python
if sf_dtype is None or sf_dtype == torch.float32:
    sf_storage_elements_per_scale = 1
elif sf_dtype == torch.int:
    sf_storage_elements_per_scale = 4
else:
    raise ValueError(f"Unimplemented sf_dtype: {sf_dtype}")
```
Contributor


Severity: medium

This logic for determining sf_storage_elements_per_scale is duplicated in _empty_block_fp8 (lines 263-268). To improve maintainability and reduce code duplication, consider extracting this logic into a separate helper function.

For example:

```python
def _get_sf_storage_elements_per_scale(sf_dtype: Optional[torch.dtype]) -> int:
    if sf_dtype is None or sf_dtype == torch.float32:
        return 1
    if sf_dtype == torch.int:
        return 4
    raise ValueError(f"Unimplemented sf_dtype: {sf_dtype}")
```

You can then call this helper function in both _empty_token_fp8 and _empty_block_fp8.

```diff
-    (*dims, ceil_div(k, _BLOCK_SIZE)), device="cuda", dtype=torch.float32
+    (*dims, ceil_div(k, block_k * sf_storage_elements_per_scale)),
+    device="cuda",
+    dtype=sf_dtype,
```
Contributor


Severity: medium

While passing dtype=None to torch.empty defaults to torch.float32, making this explicit improves code clarity and robustness. It's better to explicitly handle the None case for sf_dtype. This also applies to _empty_block_fp8 on line 278.

Suggested change:

```diff
- dtype=sf_dtype,
+ dtype=sf_dtype or torch.float32,
```
