
[sgl-kernel] support custom fp8 flashmla kernel #13087

Merged
Fridge003 merged 6 commits into sgl-project:main from FlamingoPg:flashmla-fp8
Nov 13, 2025

Conversation

@FlamingoPg
Collaborator

Motivation

Add a custom fp8 flashmla kernel. This kernel is already used in sglang, but sgl-kernel does not yet support it.
Stack PR: sgl-project/FlashMLA#1

Modifications

  • Add new fp8 kernel
  • Add fp8 kernel test

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @FlamingoPg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces crucial support for custom FP8 FlashMLA kernels within the 'sgl-kernel' project. The primary goal is to leverage 8-bit floating-point precision for attention mechanisms, which can lead to significant memory savings and potential performance improvements on compatible hardware. The changes involve integrating a specialized FlashMLA branch, extending the C++ and Python APIs to handle FP8 data types, and adding robust unit tests to ensure the correctness of the new FP8 kernel.

Highlights

  • FP8 FlashMLA Kernel Integration: Integrated a custom FP8 (8-bit floating point) FlashMLA kernel by updating the FlashMLA submodule to a specific 'sgl-fp8' branch and including new 'dense_fp8' source files for the 'sm90' architecture.
  • New Kernel Functionality Exposure: Exposed new FP8-specific kernel functions, 'get_mla_decoding_metadata_dense_fp8' and 'fwd_kvcache_mla_fp8', through TorchScript bindings, making them accessible from Python.
  • Python API Adaptation: Modified the Python 'flash_mla' API to conditionally dispatch to the new FP8 kernel functions based on input data types (e.g., 'q.element_size() == 1' for FP8) and introduced 'descale_q' and 'descale_k' parameters for FP8 operations (see the dispatch sketch after this list).
  • Comprehensive FP8 Testing: Added a new, parameterized test suite ('test_flash_mla_fp8') to thoroughly validate the functionality and accuracy of the FP8 FlashMLA kernel across various configurations, including comparisons against a reference implementation.
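
To make the FP8 dispatch concrete, here is a minimal illustrative sketch of the pattern described in the third highlight above. It is not the actual sgl-kernel source: the torch.ops paths, argument order, and remaining keyword arguments are assumptions; only the q.element_size() == 1 check, the fwd_kvcache_mla_fp8 binding, and the descale_q/descale_k parameters come from this PR.

```python
# Illustrative sketch only -- not the actual sgl-kernel wrapper.
# Assumed: the torch.ops paths and argument order. From this PR:
# the element_size() check, fwd_kvcache_mla_fp8, descale_q/descale_k.
import torch


def _mla_forward(q, kcache, descale_q=None, descale_k=None, **kwargs):
    if q.element_size() == 1:
        # FP8 tensors store one byte per element, so this distinguishes
        # fp8 inputs from bf16/fp16; pass the dequantization scales along.
        return torch.ops.sgl_kernel.fwd_kvcache_mla_fp8(
            q, kcache, descale_q, descale_k, **kwargs
        )
    # Default bf16/fp16 path: the pre-existing kernel, no descale tensors.
    return torch.ops.sgl_kernel.fwd_kvcache_mla(q, kcache, **kwargs)
```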

FlamingoPg self-assigned this on Nov 11, 2025
Contributor

gemini-code-assist bot left a comment


Code Review

This pull request adds support for a custom fp8 flashmla kernel by updating the FlashMLA dependency, adding new C++/CUDA source files, and creating the corresponding Python bindings. The changes look mostly correct, but I've identified a critical issue in the Python wrapper flash_mla_with_kvcache where new parameters are used without being added to the function signature, which will cause a runtime error. Additionally, there are a couple of issues in the tests: one existing test seems to be broken by the changes, and the new test for the fp8 kernel doesn't correctly exercise the new code path for metadata generation. I've also noted a minor copy-paste error in a C++ header comment that could cause confusion.
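
Concretely, the kind of fix being requested looks roughly like the sketch below: the new descale tensors have to be declared in the wrapper's signature before the body can reference them. This is a hypothetical outline, not the actual sgl-kernel signature; every parameter other than descale_q and descale_k is an assumption.

```python
# Hypothetical outline of the requested fix: declare the new FP8 scale
# tensors in the signature instead of using undeclared names in the body.
# All parameters except descale_q/descale_k are assumptions.
from typing import Optional, Tuple

import torch


def flash_mla_with_kvcache(
    q: torch.Tensor,
    k_cache: torch.Tensor,
    block_table: torch.Tensor,
    cache_seqlens: torch.Tensor,
    head_dim_v: int,
    causal: bool = False,
    descale_q: Optional[torch.Tensor] = None,  # new: FP8 dequant scale for q
    descale_k: Optional[torch.Tensor] = None,  # new: FP8 dequant scale for k
) -> Tuple[torch.Tensor, torch.Tensor]:
    ...  # dispatch to the fp8 or non-fp8 kernel as sketched earlier
```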

Comment on lines +941 to +942
const at::Tensor& kcache, // num_blocks x page_block_size x num_heads_k x head_size (when is_fp8 is False) or
// num_blocks x num_heads_k x (page_block_size*656) (when is_fp8 is True)

Severity: medium

This comment appears to be a copy-paste from the fwd_kvcache_mla function declaration. Since fwd_kvcache_mla_fp8 is specific to FP8, the conditional parts of the comment are confusing and unnecessary. Please simplify the comment to describe only the FP8 layout.

Suggested change
- const at::Tensor& kcache, // num_blocks x page_block_size x num_heads_k x head_size (when is_fp8 is False) or
-                           // num_blocks x num_heads_k x (page_block_size*656) (when is_fp8 is True)
+ const at::Tensor& kcache, // num_blocks x num_heads_k x (page_block_size*656)

@whybeyoung
Collaborator

LGTM

@HanHan009527
Collaborator

Hi, could I know why we're putting FlashMLA into the sglang kernel, and where I can see the related plans?

@FlamingoPg
Collaborator Author

FlamingoPg commented Nov 12, 2025

Hi, could I know why we're putting FlashMLA into the sglang kernel, and where I can see the related plans?

Hi, @HanHan009527 We didn’t directly put flashmla into the sgl kernel. You can see this in my stack PR. Our integration approach is the same as vLLM’s. The key reason is that compiling this kernel ourselves helps us maintain a stable sgl-kernel wheel. flashmla still uses pybind, which brings torch/cuda/python version constraints during integration.
As for integrating third-party kernels, we don’t have a broad plan for that at the moment. I’m currently handling the overall integration work myself.

@FlamingoPg
Collaborator Author

Do you have any concerns about this PR, or are there any other integration-related questions I can answer for you?

@Fridge003
Collaborator

@HanHan009527
Collaborator

Hi, could I know why we're putting FlashMLA into the sglang kernel, and where I can see the related plans?

Hi, @HanHan009527 We didn’t directly put flashmla into the sgl kernel. You can see this in my stack PR. Our integration approach is the same as vLLM’s. The key reason is that compiling this kernel ourselves helps us maintain a stable sgl-kernel wheel. flashmla still uses pybind, which brings torch/cuda/python version constraints during integration. As for integrating third-party kernels, we don’t have a broad plan for that at the moment. I’m currently handling the overall integration work myself.

Got it, thanks.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@FlamingoPg
Collaborator Author

sgl-kernel fixed.

@Fridge003
Collaborator

The custom fp8 flashmla kernel is only covered by the sgl-kernel tests, so as long as this run passes it will be OK:
https://github.com/sgl-project/sglang/actions/runs/19327550680/job/55336255414?pr=13087

Fridge003 merged commit 2966367 into sgl-project:main on Nov 13, 2025
90 of 106 checks passed
HanHan009527 pushed a commit to bytedance-iaas/sglang that referenced this pull request Dec 1, 2025