[Perf] Tunings for SM100 FP8 CUTLASS kernel#8818
Conversation
Summary of Changes
Hello @hhzguo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on significantly improving the performance of FP8 General Matrix Multiplication (GEMM) operations on NVIDIA Blackwell GPUs (SM100 architecture). It achieves this by porting and adapting performance tunings from vLLM, specifically by introducing a dynamic kernel dispatch mechanism that selects optimized CUTLASS kernel configurations based on the input matrix dimensions. The changes also include updates to the benchmarking infrastructure to validate these performance gains.
Highlights
- Performance Tuning for Blackwell GPUs: This pull request introduces significant performance optimizations for FP8 General Matrix Multiplication (GEMM) operations specifically targeting NVIDIA Blackwell (SM100) GPUs. These tunings are adapted from vLLM's successful implementations.
- Dynamic Kernel Dispatch: A key change is the implementation of a dynamic kernel dispatch mechanism. This allows the system to automatically select the most optimal CUTLASS kernel configuration (including CTA and Cluster shapes) at runtime, based on the M-dimension of the input matrix.
- Expanded Launch Configurations: New, specialized CUDA launch configurations have been added to replace a single, general configuration. These new configurations are tailored for different ranges of the input matrix M-dimension: [1, 16], (16, 64], (64, 256], and (256, inf], ensuring better performance across various matrix sizes.
- Benchmark Script Update: The bench_fp8_gemm.py script has been updated to properly benchmark SGLang's FP8 GEMM implementation, including its quantization step, and to facilitate direct performance comparisons with vLLM's implementation.
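The quantization step the benchmark now includes can be illustrated with a short, self-contained sketch. This is not the actual sgl-kernel API (the real `sglang_scaled_fp8_quant` operates on CUDA tensors); the FP8 E4M3 maximum of 448 is standard, but the per-tensor scaling function below is a simplified pure-Python stand-in for illustration only.

```python
# Illustrative sketch of per-tensor scaled FP8 quantization, NOT the actual
# sgl-kernel implementation. FP8 E4M3 can represent magnitudes up to 448,
# so values are scaled such that the largest magnitude maps into that range.
FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def scaled_fp8_quant(values):
    """Quantize a list of floats: divide by a per-tensor scale, then clamp."""
    amax = max(abs(v) for v in values)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    quantized = [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / scale)) for v in values]
    return quantized, scale

q, s = scaled_fp8_quant([0.5, -224.0, 100.0])
# Multiplying each quantized value by the scale recovers the originals
# (exactly here, since rounding to an FP8 grid is omitted in this sketch).
```

The real kernel additionally rounds each scaled value to the nearest representable FP8 number; the sketch omits that step to stay focused on the scaling scheme.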
Code Review
This pull request introduces performance tunings for FP8 GEMM kernels, adding specific launch configurations for different matrix sizes on SM100 architectures. The benchmark script has also been updated to support these changes. The main areas for improvement are around code duplication in both the C++ kernel and the Python benchmark script, which could be refactored for better long-term maintainability.
Consider extracting the quantization and transpose operations outside the if/elif block to reduce code duplication. This improves maintainability by adhering to the DRY principle.
```python
a_fp8, scale_a_fp8 = sglang_scaled_fp8_quant(a, scale_a)
b_fp8, scale_b_fp8 = sglang_scaled_fp8_quant(b, scale_b)
b_fp8 = b_fp8.t()
```
sgl-kernel/csrc/gemm/math.hpp
Outdated
Great job. Thanks for porting the kernel updates.
@hhzguo Can you help us lint the code? Reference: https://docs.sglang.ai/references/contribution_guide.html#code-formatting-with-pre-commit
Force-pushed: 91e8e7f to 7ad1715
@hhzguo Hi, I saw you force-pushed the code. The lint issue is resolved. Are there any other updates?
Hi @HydraQYH. Have you had some time to review this and get it merged?
Will merge this PR after CI passes.
Motivation
Port vLLM's FP8 GEMM tunings:
- vllm-project/vllm#18778 (comment)
- vllm-project/vllm#19566 (comment)
- https://github.com/vllm-project/vllm/pull/20071/files

to SGLang to improve the performance of scaled FP8 matrix multiplication (GEMM) on NVIDIA Blackwell GPUs.
Make sgl-kernel/benchmark/bench_fp8_gemm.py work with the above changes.
Modifications
This PR introduces the following FP8 GEMM optimizations:
- Launch configuration tuning: added dedicated configs for the M-dimension ranges [1, 16], (16, 64], (64, 256], and (256, inf] to replace the original all-in-one config, for better CUDA matrix-multiplication performance across matrix sizes.
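The M-dimension bucketing can be sketched as follows. This is a hypothetical illustration: the actual dispatch lives in the C++ CUTLASS launcher, and the config names below are placeholders rather than the PR's real CTA/cluster shapes.

```python
# Hypothetical sketch of dispatching on the GEMM M-dimension, mirroring the
# buckets described above: [1, 16], (16, 64], (64, 256], (256, inf].
# The config names are placeholders, not the PR's actual tile/cluster shapes.
def select_launch_config(m: int) -> str:
    if m <= 16:       # [1, 16]: small-batch / decode-style shapes
        return "config_m16"
    elif m <= 64:     # (16, 64]
        return "config_m64"
    elif m <= 256:    # (64, 256]
        return "config_m256"
    else:             # (256, inf]: large prefill-style shapes
        return "config_large"
```

The real launcher selects among CUTLASS kernel instantiations with different CTA and cluster shapes, but the control flow is the same threshold chain on M.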
Accuracy Test
We verified correctness and measured performance using:
Test environment:
- NVIDIA B200 (Blackwell GPU)
- SGLang: v0.4.9.post6
- sgl-kernel: built from source, based on 0.2.5
- CUDA version: 12.8

Test script:
python sgl-kernel/tests/test_fp8_gemm.py
Benchmark & Profiling
Checklist