
[Perf] Tunings for SM100 FP8 CUTLASS kernel#8818

Merged
zhyncs merged 4 commits into sgl-project:main from hhzguo:henryg/fp8_cutlass_tuning
Aug 14, 2025

Conversation

hhzguo (Contributor) commented Aug 5, 2025:

Motivation

Port vLLM's
vllm-project/vllm#18778 (comment)
vllm-project/vllm#19566 (comment)
https://github.com/vllm-project/vllm/pull/20071/files

to SGLang to improve the performance of scaled FP8 GEMM (matrix multiplication) on NVIDIA Blackwell GPUs, and make sgl-kernel/benchmark/bench_fp8_gemm.py work with the above changes.

Modifications

This PR introduces optimizations for FP8 GEMM:

Added launch-configuration tuning: extra configs for the M ranges [1, 16], (16, 64], (64, 256], and (256, inf] replace the original all-in-one config for better CUDA GEMM performance.

Accuracy Test

We verified correctness and measured performance using:

Test Env:
NVIDIA B200 (Blackwell GPU)
SGLang: v0.4.9.post6
sgl-kernel: built from source, based on 0.2.5
CUDA version: 12.8

Test script:
python sgl-kernel/tests/test_fp8_gemm.py

Benchmark & Profiling

(benchmark results image omitted)


gemini-code-assist (bot) left a comment:

Summary of Changes

Hello @hhzguo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on significantly improving the performance of FP8 General Matrix Multiplication (GEMM) operations on NVIDIA Blackwell GPUs (SM100 architecture). It achieves this by porting and adapting performance tunings from vLLM, specifically by introducing a dynamic kernel dispatch mechanism that selects optimized CUTLASS kernel configurations based on the input matrix dimensions. The changes also include updates to the benchmarking infrastructure to validate these performance gains.

Highlights

  • Performance Tuning for Blackwell GPUs: This pull request introduces significant performance optimizations for FP8 General Matrix Multiplication (GEMM) operations specifically targeting NVIDIA Blackwell (SM100) GPUs. These tunings are adapted from vLLM's successful implementations.
  • Dynamic Kernel Dispatch: A key change is the implementation of a dynamic kernel dispatch mechanism. This allows the system to automatically select the most optimal CUTLASS kernel configuration (including CTA and Cluster shapes) at runtime, based on the M-dimension of the input matrix.
  • Expanded Launch Configurations: New, specialized CUDA launch configurations have been added to replace a single, general configuration. These new configurations are tailored for different ranges of the input matrix M-dimension: [1, 16], (16, 64], (64, 256], and (256, inf], ensuring better performance across various matrix sizes.
  • Benchmark Script Update: The bench_fp8_gemm.py script has been updated to properly benchmark SGLang's FP8 GEMM implementation, including its quantization step, and to facilitate direct performance comparisons with vLLM's implementation.

gemini-code-assist (bot) left a review:

Code Review

This pull request introduces performance tunings for FP8 GEMM kernels, adding specific launch configurations for different matrix sizes on SM100 architectures. The benchmark script has also been updated to support these changes. The main areas for improvement are around code duplication in both the C++ kernel and the Python benchmark script, which could be refactored for better long-term maintainability.

Comment on lines 1132 to 1158

Severity: high

The logic within the if (bias) and else blocks is highly repetitive. Consider using a template or a function pointer to reduce code duplication and improve maintainability. This will make the code easier to understand and modify in the future.

Severity: medium

Consider extracting the quantization and transpose operations outside the if/elif block to reduce code duplication. This improves maintainability by adhering to the DRY principle.

a_fp8, scale_a_fp8 = sglang_scaled_fp8_quant(a, scale_a)
b_fp8, scale_b_fp8 = sglang_scaled_fp8_quant(b, scale_b)
b_fp8 = b_fp8.t()

Severity: medium

The static keyword is unnecessary for a constexpr template function. Removing it will prevent potential linking issues if this header is included in multiple compilation units.

Suggested change:
- static inline constexpr auto div_ceil(A a, B b) {
+ inline constexpr auto div_ceil(A a, B b) {

HydraQYH (Collaborator) commented Aug 6, 2025:

Great Job. Thanks for porting the kernel updates.

hhzguo force-pushed the henryg/fp8_cutlass_tuning branch from 91e8e7f to 7ad1715 on August 8, 2025, 21:43
HydraQYH (Collaborator) commented:
@hhzguo Hi, I saw you force pushed the code. The lint issue is resolved. Are there any other updates?

hhzguo (Contributor, Author) commented Aug 12, 2025:

> @hhzguo Hi, I saw you force pushed the code. The lint issue is resolved. Are there any other updates?

@HydraQYH That's great and thank you for reviewing the change. I don't have other updates for the moment.

yicwang (Contributor) commented Aug 13, 2025:

Hi @HydraQYH. Have you got some time to review and have this merged?

HydraQYH (Collaborator) commented:

> Hi @HydraQYH. Have you got some time to review and have this merged?

Will merge this PR after CI passed.

@zhyncs zhyncs merged commit 841810f into sgl-project:main Aug 14, 2025
43 of 61 checks passed
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025
