
[sgl-kernel] Streamline kernel size report (Top 20 only) and clean up#15552

Merged
BBuf merged 1 commit into main from clean_analyze_whl_kernel_sizes on Dec 21, 2025

Conversation


@BBuf BBuf commented Dec 21, 2025

Motivation

Follow-up to #14544.

Cleans up sgl-kernel/analyze_whl_kernel_sizes.py by removing noisy, non-essential prints and comments, keeping the output focused on the text report, which summarizes only the Top 20 kernel name-prefix groups and the Top 20 individual kernels, with an aggregated “Other” row for the rest.

Sample output on an H200:

============================================================================================================================================
CUDA Kernel Size Analysis
============================================================================================================================================

Total kernels: 10272
Total size: 1631.91 MB (1,711,178,912 bytes)
Average kernel size: 162.68 KB

============================================================================================================================================
Kernel Groups (by name prefix) - Top 20
============================================================================================================================================
Rank   Kernel Prefix                                                                    Count    Total (MB)   %       
--------------------------------------------------------------------------------------------------------------------------------------------
1      void cutlass::device_kernel                                                      3648     651.90       39.95   
2      void marlin_moe_wna16::Marlin                                                    720      370.32       22.69   
3      void marlin::Marlin                                                              1080     292.63       17.93   
4      void fast_hadamard_transform_kernel                                              294      89.20        5.47    
5      void flash::flash_fwd_sparse_kernel                                              32       11.43        0.70    
6      void cutlass::Kernel2                                                            96       11.24        0.69    
7      void per_token_group_quant_8bit_kernel                                           196      8.82         0.54    
8      void flashinfer::sampling::TopKTopPSamplingFromProbKernel                        40       7.54         0.46    
9      void flashinfer::sampling::TopKSamplingFromProbKernel                            40       7.50         0.46    
10     void router_gemm_kernel_bf16_output                                              64       6.89         0.42    
11     void router_gemm_kernel_float_output                                             64       6.83         0.42    
12     void mscclpp::executionKernel                                                    30       5.48         0.34    
13     void moe_fused_gate_kernel                                                       24       3.51         0.22    
14     void flashinfer::sampling::OnlineSoftmaxFusedKernel                              40       3.44         0.21    
15     void topkGatingSigmoid                                                           54       3.43         0.21    
16     void flashinfer::sampling::ChainSpeculativeSampling                              20       3.33         0.20    
17     void flashinfer::sampling::TopPSamplingFromProbKernel                            20       3.28         0.20    
18     void flashinfer::norm::FusedAddRMSNormKernel                                     30       3.15         0.19    
19     void topkGatingSoftmax                                                           54       3.03         0.19    
20     void flashinfer::BatchQKApplyRotaryPosIdsCosSinCacheEnhancedKernel               96       2.92         0.18    
Other  (remaining 1312 kernel groups)                                                   3630     136.04       8.34    

============================================================================================================================================
Individual Kernels (sorted by size) - Top 20
============================================================================================================================================
Rank   File                                     Kernel Name                                                            Size (KB)    Size (MB)    %       
--------------------------------------------------------------------------------------------------------------------------------------------
1      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1475.00      1.4404       0.09    
2      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1434.25      1.4006       0.09    
3      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1400.88      1.3680       0.08    
4      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1362.38      1.3304       0.08    
5      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1267.00      1.2373       0.08    
6      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1257.00      1.2275       0.08    
7      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1191.88      1.1639       0.07    
8      common_ops.abi3.so                       void cutlass::device_kernel<cutlass::fmha::kernel::Sm100FmhaMlaK...    1183.00      1.1553       0.07    
9      common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899923621888l, ...    1029.50      1.0054       0.06    
10     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899923621888l, ...    1029.50      1.0054       0.06    
11     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899923621888l, ...    998.62       0.9752       0.06    
12     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899923621888l, ...    998.62       0.9752       0.06    
13     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899923621888l, 1125899...    977.88       0.9550       0.06    
14     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899923621888l, 1125899...    977.88       0.9550       0.06    
15     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899907892224l, ...    968.12       0.9454       0.06    
16     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__nv_bfloat16, 1125899907892224l, ...    968.12       0.9454       0.06    
17     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899907892224l, 1125899...    947.88       0.9257       0.06    
18     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899907892224l, 1125899...    947.88       0.9257       0.06    
19     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899923621888l, 1125899...    947.12       0.9249       0.06    
20     common_ops.abi3.so                       void marlin_moe_wna16::Marlin<__half, 1125899923621888l, 1125899...    947.12       0.9249       0.06    
Other  (remaining 10252 kernels)                                                                                       1648763.53   1610.1206    98.66   
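The Top-20-plus-“Other” aggregation shown above can be sketched roughly as follows. This is an illustrative outline, not the actual script: the function names (`kernel_prefix`, `report_top_groups`) and the exact column formatting are assumptions for the example.

```python
from collections import defaultdict

def kernel_prefix(name: str) -> str:
    """Group key: the demangled kernel name truncated at its first
    template or argument bracket, e.g.
    'void cutlass::device_kernel<...>' -> 'void cutlass::device_kernel'."""
    for sep in ("<", "("):
        cut = name.find(sep)
        if cut != -1:
            name = name[:cut]
    return name.strip()

def report_top_groups(kernels, top_n=20):
    """kernels: iterable of (demangled_name, size_bytes) pairs.
    Prints the top_n largest prefix groups and folds the rest into 'Other'."""
    groups = defaultdict(lambda: [0, 0])  # prefix -> [count, total_bytes]
    for name, size in kernels:
        entry = groups[kernel_prefix(name)]
        entry[0] += 1
        entry[1] += size
    total = sum(t for _, t in groups.values()) or 1
    ranked = sorted(groups.items(), key=lambda kv: kv[1][1], reverse=True)
    top, rest = ranked[:top_n], ranked[top_n:]
    for rank, (prefix, (count, size)) in enumerate(top, start=1):
        print(f"{rank:<6} {prefix:<70} {count:<8} "
              f"{size / 2**20:<12.2f} {100 * size / total:<8.2f}")
    if rest:
        other_count = sum(c for _, (c, _) in rest)
        other_size = sum(s for _, (_, s) in rest)
        print(f"Other  (remaining {len(rest)} kernel groups)  "
              f"{other_count:<8} {other_size / 2**20:<12.2f} "
              f"{100 * other_size / total:<8.2f}")
```

The key design point is that kernels are grouped by name prefix before ranking, so thousands of template instantiations of the same kernel (e.g. `cutlass::device_kernel<...>`) collapse into one row.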

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

github-actions bot added the documentation (Improvements or additions to documentation) and sgl-kernel labels on Dec 21, 2025
@gemini-code-assist (Contributor)

Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request streamlines the sgl-kernel CUDA kernel size analysis script and reduces its verbosity. The goal is a more focused, actionable report: only the most significant kernel groups and individual kernels (Top 20 each) are listed, with the rest aggregated into an 'Other' row. This makes it easier for developers to identify code bloat related to kernel sizes.

Highlights

  • Streamlined Kernel Size Reporting: The analysis tool now focuses its output on the Top 20 kernel name-prefix groups and Top 20 individual kernels, improving clarity and relevance for identifying optimization opportunities.
  • Aggregated 'Other' Category: An 'Other' row has been added to both kernel group and individual kernel reports, summarizing the remaining kernels beyond the Top 20.
  • Reduced Verbosity: Non-essential print statements and comments have been removed from 'analyze_whl_kernel_sizes.py' for a cleaner execution output.
  • Simplified Output Format: The tool no longer generates a JSON file or timing information, focusing solely on the text report.
  • README Update: The 'README.md' has been updated to reflect the new output format and explicitly state the 'cubloaty' dependency.



@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request successfully streamlines the kernel size analysis script. The changes significantly clean up the output by removing verbose logging, timing information, and the JSON export, while focusing the text report on the Top 20 items for both kernel groups and individual kernels. This makes the report much more concise and useful. The accompanying README update is also clear and accurate. I have a couple of minor suggestions to further improve code readability by using f-strings consistently.
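The f-string consistency suggestion amounts to preferring one formatting style throughout the script. The snippet below is illustrative only (the variable name and values are made up, not taken from the diff):

```python
size_mb = 651.90

# Mixed legacy styles the reviewer suggests replacing:
line_a = "Total: {:.2f} MB".format(size_mb)
line_b = "Total: %.2f MB" % size_mb

# Consistent f-string style:
line_c = f"Total: {size_mb:.2f} MB"

assert line_a == line_b == line_c  # all render "Total: 651.90 MB"
```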

@BBuf BBuf merged commit 7fa4906 into main Dec 21, 2025
55 of 61 checks passed
@BBuf BBuf deleted the clean_analyze_whl_kernel_sizes branch December 21, 2025 02:00